
Refine Test Items for Accurate Measurement: Six Valuable Tips

Karen Siroky, MSN, RN-BC, and Bette Case Di Leonardi, PhD, RN-BC

Karen Siroky, MSN, RN-BC, is Senior Clinical Director, AMN Healthcare, San Diego, California. Bette Case Di Leonardi, PhD, RN-BC, is Independent Consultant in Education and Competency Management, Chicago, Illinois.

The authors have disclosed that they have no significant relationship with, or financial interest in, any commercial companies pertaining to this article.

ADDRESS FOR CORRESPONDENCE: Bette Case Di Leonardi, PhD, RN-BC, 56 West Schiller Street, Chicago, IL (e-mail: [email protected]).

DOI: 10.1097/NND.0000000000000123

3.0 ANCC Contact Hours

Nursing Professional Development (NPD) specialists frequently design test items to assess competence, to measure learning outcomes, and to create active learning experiences. This article presents six valuable tips for improving test items and using test results to strengthen validity of measurement. NPD specialists can readily apply these tips and examples to measure knowledge with greater accuracy.

Nursing Professional Development (NPD) specialists continuously devise and improve upon approaches to assess and validate competency. Competency embraces the cognitive, affective, and psychomotor domains of learning and performance. Tests function as one indicator in competency management models by measuring the cognitive domain: a clinician's knowledge base or competence.

Although nursing examinations have begun to introduce alternatives to multiple-choice items, the multiple-choice item remains prevalent (Sutherland, Schwartz, & Dickison, 2012). Multiple-choice test items measure competence/knowledge and not competency/performance. However, test results make a more valid contribution to competency assessment/validation when test developers sharpen their focus on the knowledge pertinent for practice and frame test items in a practice context.

Despite a careful test planning process, flaws in test item construction can distract from accurate measurement and threaten validity (Haladyna, Downing, & Rodriguez, 2002; McDonald, 2013; Oermann & Gaberson, 2013). Flaws in test item construction draw the test taker's focus away from the point of the question by creating "noise." In this sense of the word, noise includes any features irrelevant to the intended measurement that make a question more difficult to decipher and answer correctly. NPD specialists need to eliminate noise to the greatest extent possible to assure that their test items precisely measure the knowledge/competence they intend to measure.

NPD specialists frequently design test items to assess competence. In the authors' organization, the competency model includes knowledge/competence assessment examinations along with skills checklists, letters of reference, background checks, and ongoing performance appraisals to document competence and competency of nurses in a wide range of specialties and allied health personnel.

NPD specialists also develop test items to measure learning outcomes and to create active learning experiences. When designing posttests and interactive learning methods, learning objectives guide the selection and allocation of questions. Games in live sessions and interactive features in online courses use questions to engage the learner. Well-written questions give the learner practice in applying course content to realistic practice situations, that is, to put the objectives of the course into action.

When constructing tests to measure the knowledge base/competence pertinent to a particular clinical role, NPD specialists analyze performance expectations for the role, consult with subject matter experts (Toth, 2011), and construct tests of sufficient length to help assure accurate measurement. These processes help NPD specialists to represent practice accurately, increasing the validity of the measurement.

The world of measurement uses the term validity to describe the degree to which a measurement actually measures the intended characteristic (Bannigan & Watson, 2009). The credentialing world considers the related concepts of integrity, authenticity, and fidelity to explore how well certification, recertification, and other credentialing processes assure competence and continuing competence.

To measure competence accurately, multiple-choice items must avoid threats to validity. This article exposes some common flaws in multiple-choice items that interfere with accurate measurement and suggests remedies in the form of six tips to improve test items (see Figure 1).

FIGURE 1 Six tips to improve test items.

Tip #1: Create a Practice Context

Place the question in the practice context to set the stage for nursing action. A solid practice context supports validity, but it is important to limit the length of the contextual story.



Too much story adds reading load. Reading load refers to text in excess of what is needed to clearly express and measure as intended. Too little fails to set the stage for nursing action. At times, it may be acceptable to include a small amount of irrelevant information if the item is intended to measure the ability to sort out the significant findings. The test taker must use the information in the stem to answer the item. It is inappropriate to develop a situation involving a patient who has diabetes and then ask the question, "What is the normal range for fasting blood glucose?" A better use of the situation is to ask the test taker to analyze information presented about signs and symptoms and determine a course of action (see example in Figure 2).

FIGURE 2 Tip #1: Create a practice context, not a story.

When framing situations for test items, think through "What does the nurse do?" in the situation. Whenever possible, begin each option with a verb that states what the nurse does. The competent nurse does more than recognize an abnormal lab value. In fact, most clinical settings include reference ranges with lab reports. And so, the nurse must focus on patterns in findings, recognize why certain values are important, and decide what to do about abnormal findings. For example, instead of asking a question about the usual platelet count for a leukemia patient, ask why this value is important and what the nurse does about it. The correct answer is not a lab value but might rather be "increased risk for bleeding; implement nursing orders/standards to prevent injury."

The patient teaching context might also provide a practice application of a fact or principle, for example, explaining how a pacemaker works or the purpose of oxygen therapy for a patient who has had a myocardial infarction (see example in Figure 3). However, putting facts and principles into lay language for a patient teaching test item may create options of unwieldy length. To create more succinct options, phrase the stem "You will explain in terms understandable to him that:"

FIGURE 3 Tip #1: Create a practice context, raise the bar.

Item writers often find it very easy to write items that test facts and principles. But to make a valid connection between knowledge/competence and practice/competency, the item writer must answer the question, "How does the nurse use this fact or principle in making a judgment?" The answer will suggest an item written at a higher cognitive level. Raise the bar by asking the test taker to exercise judgment, not simply recall a fact. An item that asks the test taker to interpret information provided and choose what action to take will usually be a higher cognitive level item, unless the test taker knows the correct answer because it is a familiar protocol and not a matter of professional judgment.

Remember to keep the practice context in focus by using common clinical mistakes and misunderstandings as distractors (incorrect options). Common mistakes make plausible distractors and may help to prevent mistakes when a test taker receives feedback on test performance. Avoid humorous or nonsensical distractors. Humor and nonsense may insult and distract the serious test taker. Meaningless distractors waste an opportunity to measure because the test taker will easily rule them out.

Tip #2: Focus the Question

A well-written stem poses a question or makes an incomplete statement. A well-written stem, and not the options, contains the central idea (Haladyna et al., 2002). Too much verbiage interferes with validity, because it detracts from the central point and creates reading load. The first words of the stem set the context, such as the patient, the situation, and the test taker's role. The last words tell the test taker what to look for in the options. For example, conclude a calculation item with "You will administer how many milliliters?" followed by options, each of which is a number of milliliters (see example in Figure 4).

FIGURE 4 Tip #2: Focus the question.

Tip #3: Design One Clear Correct Choice Supported by Rationale

Sometimes test takers can successfully defend an answer other than the intended correct answer. Prevent this situation by locating current evidence-based rationale to support the correct option and the incorrectness of distractors. Doing so may lead to refining the options. The rationale and citation serve as learning resources for test takers.

Tip #4: Avoid Noisemakers: All-of-the-Above, None-of-the-Above, Negatives

All-of-the-above and none-of-the-above do not fit grammatically as the answer to a question or as a phrase to complete an incomplete sentence. If the item asks what action the nurse will take, all-of-the-above is not an answer to that question. In addition, when the test taker knows that more than one of four options is correct, he knows that all-of-the-above is the only possibility. Conversely, if he knows that one of the options is incorrect, he will rule out all-of-the-above. In either case, the use of all-of-the-above as an option has made one of the incorrect options useless as a distractor for technical reasons that have nothing to do with measuring knowledge. Test takers may gravitate to the all-of-the-above option when they do not know the answer, figuring that it is a good guess. This is especially likely with test takers who have had plenty of previous experience with all-of-the-above as the correct answer.

As an alternative, create succinct, two- or three-part options in each distractor. If more parts are essential, place the one or two that everyone knows in the stem. For example, an item might test the knowledge of morphine, oxygen, nitroglycerin, and aspirin (MONA) as interventions to treat myocardial infarction. To decrease reading load for the test taker and to spare the item writer the challenge of coming up with three incorrect four-part options, the item writer might place one or more parts of MONA in the stem. For example, an item might read "For the patient who is experiencing a myocardial infarction (MI), immediate interventions include aspirin and:" Because the use of aspirin to treat MI is widely publicized, most test takers probably know that aspirin is correct and would choose only an option that contained aspirin. As another example, actions that apply in most situations such as "follow policy and procedure" or "document your observations" might be included in the stem. Offering such obvious correct answers does not help the test writer sort those who know the material from those who do not. The test development term for this sorting is discrimination.

Negatives of all kinds (such as none-of-the-above, double negatives in the item, all except, not) create noise. Test takers, especially if anxious or hurried, often misread negatives as positives and so answer incorrectly. Generally, it is more important to focus the test taker's attention on the correct action, rather than the incorrect action (see example in Figure 5). In addition, a negative requires a more complex thought process, which introduces noise and distracts from measuring what the item was intended to measure. Some recommend use of none-of-the-above as an option in calculation questions, in which the test taker must first perform the calculation before searching the options for the correct answer (McDonald, 2013).

FIGURE 5 Tip #4: Avoid noisemakers: all-of-the-above, none-of-the-above, negatives.

Tip #5: Consider Three-Choice Multiple-Choice

Nursing examinations such as the licensing examinations and specialty certification examinations consist largely of four-choice multiple-choice items. NCLEX-RN® includes some alternative item types. Some certification examinations have introduced other formats. Academic programs also use alternatives to multiple-choice test items. However, the four-choice multiple-choice item predominates.

The literature (Edwards, Arthur, & Bruce, 2012; Rodriguez, 2005; Tarrant & Ware, 2012) suggests that the use of three-choice rather than four-choice multiple-choice items detracts little from validity and reliability and has decided advantages. It eliminates the difficulty of creating a fourth plausible option, greatly increasing efficiency of test development. Often item analysis reveals that, in a four-choice item, few or no test takers select one particular option. With less reading load per item, test takers can respond to three-choice items more quickly. Therefore, the test can present more items in the same time period. A longer test, one with more items, offers the advantage of increasing validity and reliability.

Tip #6: Use Analysis of Test Results to Improve Tests

Test items need regular updating to stay aligned with current evidence-based practice. In addition, technical improvements guided by analysis of test results strengthen validity. Four aspects of analysis of test results are especially useful:

- pass rate,
- difficulty,
- discrimination, and
- distractor analysis.

Although NPD specialists might welcome strict rules about using analysis of test results, they cannot escape the need to apply professional judgment in using the analysis. The analyzed data provide a source of knowledge, not a strict rule. Effective use of analysis of results requires the wisdom of professional judgment. Analysis of results tells test developers what to investigate, not what to do.

The discussion presented here is simplified to provide insight into the use of results. Most knowledge/competence assessment tests and continuing education course posttests that NPD specialists create are mastery tests. In mastery testing, the expectation is that most test takers will pass the test by obtaining a predetermined minimum passing score. A frequency distribution of scores is heavily skewed toward higher scores. Some of the published guidelines for interpretation of analysis of results apply to test results that conform to a bell curve rather than results with many high scores and few scores below the passing standard. Mastery testing usually yields results of a high pass rate and many items answered correctly by most test takers. This situation influences the statistics used in analysis of results. For complete discussion and more precise information about computation, see McDonald (2013) and Oermann and Gaberson (2013).

Pass rate equals the percentage of test takers who passed the test. A discussion of methods for setting a passing or cutoff score is beyond the scope of this article. Because NPD testing is competence and safety related, tests used in NPD often require a percentage correct of at least 80% and occasionally 100%. In NPD, test takers who do not pass may receive remediation to assure that they know the correct answer. The remediation process may reveal faults in a particular test item, such as ambiguity or perhaps two correct answers to an item intended to have only one correct answer.
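To make the arithmetic concrete, a minimal Python sketch of the pass-rate calculation follows; the scores and the 80% passing standard are hypothetical values chosen for illustration, not data from the article.

```python
# Pass rate: the percentage of test takers who met the passing standard.
# Scores and the 80% cutoff below are hypothetical, for illustration only.
scores = [92, 85, 78, 88, 95, 74, 81, 90, 83, 79]  # percent correct per test taker
passing_score = 80

passers = [score for score in scores if score >= passing_score]
pass_rate = 100 * len(passers) / len(scores)
print(f"Pass rate: {pass_rate:.0f}%")  # 7 of 10 passed -> 70%
```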

In continuing education posttesting, educators may tolerate a lower pass rate. Sometimes participants take the posttest more than once to obtain passing scores, but their initial scores may be included in the pass rate calculation.

If the pass rate is 100%, one may question whether the test is too easy. Perhaps some items need to present greater challenge. Perhaps the topics are too basic. Perhaps 100% is essential to assure safe practice. Similarly, a low pass rate requires investigation.

Difficulty equals the percentage of test takers who answered an item correctly. Difficulty may be calculated for each item and for the test overall as an average of all the individual item difficulties. Paradoxically, 100% or 1.0 difficulty means that all test takers answered correctly: the higher the difficulty value, the easier the item for this group of test takers. For example, if difficulty = 0.75, 75% of test takers answered correctly.

As a useful rule of thumb, investigate any item answered correctly by fewer than 75% of test takers. Investigate does not dictate whether to revise, eliminate, or retain. It simply means to use professional judgment in exploring the poor performance and then to adjust the item, adjust the learning experience, or simply enforce the expectation.
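As a sketch of the same computation in Python, with a hypothetical answer key and hypothetical responses:

```python
# Item difficulty: the proportion of test takers who answered the item correctly.
# Responses and the answer key are hypothetical, for illustration only.
responses = ["B", "B", "C", "B", "A", "B", "B", "D", "B", "B"]  # one response per test taker
correct_answer = "B"

difficulty = sum(response == correct_answer for response in responses) / len(responses)
print(f"Difficulty: {difficulty:.2f}")  # 7 of 10 correct -> 0.70
# 0.70 falls below the 75% rule of thumb, so this item would warrant investigation.
```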

Discrimination equals the difference between the number of high scorers/passers who answered an item correctly and the number of low scorers/nonpassers who answered an item correctly. The desired result is that at least as many high scorers/passers as low scorers/nonpassers answered correctly. When more low scorers/nonpassers than high scorers/passers answer correctly, it suggests that something is amiss with the item. This situation is called negative discrimination, because the number of high scorers/passers minus the number of low scorers/nonpassers yields a negative result. Perhaps the item is ambiguous, or perhaps advanced knowledge leads a test taker away from the intended correct answer.
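A sketch of this simplified discrimination count, again in Python with hypothetical data (passers and nonpassers identified in advance; formal item analysis often uses other indices):

```python
# Discrimination (simplified, as described above): the number of passers who
# answered the item correctly minus the number of nonpassers who did.
# All answers below are hypothetical, for illustration only.
passer_answers = ["B", "B", "B", "A", "B", "B"]  # item responses from high scorers/passers
nonpasser_answers = ["B", "C", "B", "B", "B"]    # item responses from low scorers/nonpassers
correct_answer = "B"

correct_passers = sum(a == correct_answer for a in passer_answers)        # 5
correct_nonpassers = sum(a == correct_answer for a in nonpasser_answers)  # 4
discrimination = correct_passers - correct_nonpassers
print(f"Discrimination: {discrimination}")  # a negative value flags a suspect item
```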

Distractor analysis equals the number and status (high or low scorers) of test takers who choose each incorrect option. As noted previously, distractors must be plausible and should attract test takers who do not know the correct answer. When few or no test takers choose a particular distractor, it suggests a need to make the distractor more challenging. Related to discrimination, it signals a problem with a distractor if low scorers answer correctly but more high scorers choose a particular distractor. For NPD purposes, computation of distractor analysis is rarely indicated. However, it is useful to see whether test takers are choosing distractors and if distractors might be improved.

When most test takers pass, there will be many distractors that are chosen by few or no test takers. Nevertheless, the test developer needs to remain alert for opportunities to improve distractors.
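For those who do want to tally distractor choices, a short Python sketch (hypothetical data; options A through D, correct answer B) shows one way to see which distractors attract passers versus nonpassers:

```python
from collections import Counter

# Distractor analysis: how many test takers, and of what status (passer or
# nonpasser), chose each incorrect option. All data are hypothetical.
correct_answer = "B"
responses = [  # (passed_test, option_chosen), one pair per test taker
    (True, "B"), (True, "B"), (True, "C"),
    (False, "B"), (False, "D"), (False, "C"),
]

distractor_counts = Counter(
    ("passer" if passed else "nonpasser", option)
    for passed, option in responses
    if option != correct_answer
)
for (status, option), count in sorted(distractor_counts.items()):
    print(f"Option {option}: chosen by {count} {status}(s)")
# Option A is never chosen here, a signal to make that distractor more plausible.
```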

A number of commercial software and Web-based programs are available to assist with analysis of results. In addition, Internet sources explain how to create spreadsheet formulae to analyze test results. With an understanding of the meaning and significance of these four aspects of analysis of results, the NPD professional can perform a simple analysis of at least selected items, even without a sophisticated program.

CONCLUSION

Careful planning and attention to the tips this article presents contribute to item validity. But carelessness can still sabotage accurate measurement. Take the final important steps and carefully proofread, edit, format, and correct errors in grammar, punctuation, capitalization, and spelling (Haladyna et al., 2002).

Valid, effective test items are essential tools in the NPD specialist's competence assessment/validation tool kit. Valid measurement builds credibility and confidence in the NPD specialist's expertise in documenting competence, both for stakeholders in the organization and for the clinicians who take these tests. Strengthening test development skills aids the NPD specialist in demonstrating the value of NPD in the organization's competency management model.

References

Bannigan, K., & Watson, R. (2009). Reliability and validity in a nutshell. Journal of Clinical Nursing, 18, 3237-3243.

Edwards, B. D., Arthur, W., & Bruce, L. (2012). The three-option format for knowledge and ability multiple choice tests: A case for why it should be more commonly used in personnel testing. International Journal of Selection and Assessment, 20(1), 65-81.

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15, 309-334.

McDonald, M. E. (2013). The nurse educator's guide to assessing learning outcomes (3rd ed.). Burlington, MA: Jones & Bartlett Learning.

Oermann, M., & Gaberson, K. (2013). Evaluation and testing in nursing education (4th ed.). New York, NY: Springer Publishing Company.

Rodriguez, M. (2005). Three options are optimal for multiple-choice items: A meta-analysis of 80 years of research. Educational Measurement: Issues and Practice, 24(2), 3-13.

Sutherland, K., Schwartz, J., & Dickison, P. (2012). Best practices for writing test items. Journal of Nursing Regulation, 3(2), 35-39.

Tarrant, M., & Ware, J. (2012). A comparison of the psychometric properties of three- and four-choice multiple-choice questions in nursing assessments. Nurse Education Today, 30, 539-543.

Toth, J. (2011). Assessment tool for medical-surgical nursing (MED-SURG BKAT©) and implications for in-service educators and managers. Nursing Forum, 46(2), 110-116.
