Making the Most of Exams: Procedures for Item Analysis

Raymond M. Zurawski, Ph.D.
Associate Professor and Coordinator of Psychology
St. Norbert College

One of the most important (if least appealing) tasks confronting faculty members is the evaluation of student performance. This task requires considerable skill, in part because it presents so many choices. Decisions must be made concerning the method, format, timing, and duration of the evaluative procedures. Once designed, the evaluative procedure must be administered and then scored, interpreted, and graded. Afterwards, feedback must be presented to students. Accomplishing these tasks demands a broad range of cognitive, technical, and interpersonal resources on the part of faculty. But an even more critical task remains, one that perhaps too few faculty undertake with sufficient skill and tenacity: investigating the quality of the evaluative procedure.

Even after giving an exam, how do we know whether it was a good one? Obviously, an exam can be only as good as the items it comprises, but then what constitutes a good exam item? Our students seem to know, or at least believe they know. But are they correct when they claim that an item was too difficult, too tricky, or simply unfair?

Lewis Aiken (1997), the author of a leading textbook on psychological and educational assessment, contends that a “postmortem” evaluation is just as necessary in classroom testing as it is in medicine. Indeed, just such a postmortem procedure for exams exists--item analysis, a group of procedures for assessing the quality of exam items. The purpose of an item analysis is to improve the quality of an exam by identifying items that are candidates for retention, revision, or removal. More specifically, an item analysis can not only identify good and deficient items but also clarify which concepts the examinees have and have not mastered.

So, what procedures are involved in an item analysis? The specific procedures involved vary, but generally, they fall into one of two broad categories: qualitative and quantitative.

Qualitative Item Analysis

Qualitative item analysis procedures include careful proofreading of the exam prior to its administration for typographical errors, for grammatical cues that might inadvertently tip off examinees to the correct answer, and for the appropriateness of the reading level of the material. Such procedures can also include small-group discussions of the quality of the exam and its items with examinees who have already taken the test, with departmental student assistants, or even with experts in the field. Some faculty use a “think-aloud test administration” (cf. Cohen, Swerdlik, & Smith, 1992) in which examinees are asked to express verbally what they are thinking as they respond to each item on an exam. This procedure can help the instructor determine whether certain students (such as those who performed well or poorly on a previous exam) misinterpreted particular items, and it can help explain why they may have done so.

Quantitative Item Analysis

In addition to these and other qualitative procedures, a thorough item analysis also includes a number of quantitative procedures. Specifically, three numerical indicators are often derived during an item analysis: item difficulty, item discrimination, and distractor power statistics.

Item Difficulty Index (p)

The item difficulty statistic is an appropriate choice for achievement or aptitude tests when the items are scored dichotomously (i.e., correct vs. incorrect). Thus, it can be derived for true-false, multiple-choice, and matching items, and even for essay items, where the instructor can convert the range of possible point values into the categories “passing” and “failing.”

The item difficulty index, symbolized p, is computed simply by dividing the number of test takers who answered the item correctly by the total number of students who answered the item. As a proportion, p can range between 0.00, obtained when no examinees answered the item correctly, and 1.00, obtained when all examinees answered the item correctly. Notice that a test item need not have a single p value. Not only may the p value vary with each class group that takes the test, but an instructor may also gain insight by computing the item difficulty level for a number of different subgroups within a class, such as those who did well on the exam overall and those who performed more poorly.
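
To make the arithmetic concrete, the short Python sketch below computes p from a list of dichotomously scored responses; the function name and data layout are illustrative assumptions, not part of any particular exam-processing package.

```python
# A minimal sketch of the item difficulty computation, assuming each
# examinee's response to the item has been scored dichotomously
# as 1 (correct) or 0 (incorrect).

def item_difficulty(item_scores):
    """Return p, the proportion of examinees who answered the item correctly."""
    if not item_scores:
        raise ValueError("No responses recorded for this item.")
    return sum(item_scores) / len(item_scores)

# Example: 30 examinees answered the item, 24 of them correctly.
responses = [1] * 24 + [0] * 6
print(item_difficulty(responses))   # 0.8
```

The same function can be applied separately to subgroups of a class (for example, the top- and bottom-scoring halves) to obtain the subgroup difficulty levels mentioned above.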

Although the computation of the item difficulty index p is quite straightforward, the interpretation of this statistic is not. To illustrate, consider an item with a difficulty level of 0.20. We do know that 20% of the examinees answered the item correctly, but we cannot be certain why they did so. Does this item difficulty level mean that the item was challenging for all but the best prepared of the examinees? Does it mean that the instructor failed in his or her attempt to teach the concept assessed by the item? Does it mean that the students failed to learn the material? Does it mean that the item was poorly written? To answer these questions, we must rely on other item analysis procedures, both qualitative and quantitative ones.

Item Discrimination Index (D)

Item discrimination analysis deals with the fact that often different test takers will answer a test item in different ways. As such, it addresses questions of considerable interest to most faculty, such as, “does the test item differentiate those who did well on the exam overall from those who did not?” or “does the test item differentiate those who know the material from those who do not?” In a more technical sense then, item discrimination analysis addresses the validity of the items on a test, that is, the extent to which the items tap the attributes they were intended to assess. As with item difficulty, item discrimination analysis involves a family of techniques. Which one to use depends on the type of testing situation and the nature of the items. I’m going to look at only one of those, the item discrimination index, symbolized D. The index parallels the difficulty index in that it can be used whenever items can be scored dichotomously, as correct or incorrect, and hence it is most appropriate for true-false, multiple-choice, and matching items, and for those essay items which the instructor can score as “pass” or “fail.”

We test because we want to find out whether students know the material, but all we learn for certain is how they did on the exam we gave them. The item discrimination index tests the test, in the hope of keeping the correspondence between knowledge and exam performance as close as it can be in an admittedly imperfect system.

The item discrimination index is calculated in the following way (a brief computational sketch follows these steps):

  1. Divide the group of test takers into two groups, high scoring and low scoring. Ordinarily, this is done by dividing the examinees into those scoring above and those scoring below the median. (Alternatively, one could create groups made up of the top and bottom quintiles or quartiles or even deciles.)
  2. Compute the item difficulty levels separately for the upper (p upper) and lower (p lower) scoring groups.
  3. Subtract the two difficulty levels, such that D = p upper - p lower.
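
The three steps above translate directly into a short computation. The Python sketch below is one possible implementation, assuming each examinee's total exam score and dichotomous (0/1) score on the item of interest are available in parallel lists; examinees who tie at the median are assigned to the lower group here.

```python
# A sketch of the three steps above: split the examinees at the median total
# score, compute the item difficulty separately for each group, and take the
# difference D = p(upper) - p(lower). Names and data layout are illustrative.

from statistics import median

def discrimination_index(total_scores, item_scores):
    """Return D for one item, given each examinee's total exam score and
    0/1 score on that item (listed in the same order)."""
    cutoff = median(total_scores)
    upper = [item for total, item in zip(total_scores, item_scores) if total > cutoff]
    lower = [item for total, item in zip(total_scores, item_scores) if total <= cutoff]
    if not upper or not lower:
        raise ValueError("The median split produced an empty group.")
    p_upper = sum(upper) / len(upper)
    p_lower = sum(lower) / len(lower)
    return p_upper - p_lower

# Example: the three highest scorers all passed the item and the three
# lowest scorers all failed it, so D = 1.00.
totals = [95, 88, 82, 70, 65, 60]
item = [1, 1, 1, 0, 0, 0]
print(discrimination_index(totals, item))   # 1.0
```
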
How is the item discrimination index interpreted? Unlike the item difficulty level p, the item discrimination index can take on negative values and can range between -1.00 and 1.00. Consider the following situation: suppose that overall, half of the examinees answered a particular item correctly, that all of the examinees who scored above the median on the exam answered the item correctly, and that all of the examinees who scored below the median answered it incorrectly. In such a situation, p upper = 1.00 and p lower = 0.00. As such, the value of the item discrimination index D is 1.00, and the item is said to be a perfect positive discriminator. Many would regard this outcome as ideal: it suggests that those who knew the material and were well prepared passed the item while all others failed it.

Though it’s not as unlikely as winning a million-dollar lottery, finding a perfect positive discriminator on an exam is relatively rare. Most psychometricians would say that items yielding positive discrimination index values of 0.30 and above are quite good discriminators and worthy of retention for future exams.

Finally, notice that item difficulty and item discrimination are not independent. If all the students in both the upper and lower groups either pass or fail an item, then p upper equals p lower and D is 0; nothing in the data indicates whether the item itself was good or not. Indeed, the value of the item discrimination index is maximized when only half of the test takers overall answer an item correctly, that is, when p = 0.50. Once again, the ideal situation is one in which the half who passed the item were precisely the students who did well on the exam overall.
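
To see the dependence in numbers, consider the largest D an item can attain at a given overall difficulty, assuming equal-sized upper and lower groups: the best case puts every correct answer in the upper group, so the ceiling is twice the smaller of p and 1 - p. The brief sketch below is an illustrative calculation, not part of the original procedures, that tabulates this ceiling.

```python
# Illustrative calculation: with equal-sized upper and lower groups, the
# largest possible D at overall difficulty p occurs when every correct
# answer comes from the upper group, giving D_max = 2 * min(p, 1 - p).

for p in (0.10, 0.25, 0.50, 0.75, 0.90):
    d_max = 2 * min(p, 1 - p)
    print(f"p = {p:.2f}  ->  maximum possible D = {d_max:.2f}")

# The ceiling peaks at D = 1.00 when p = 0.50 and shrinks toward 0.00 as an
# item becomes very easy (p near 1.00) or very hard (p near 0.00).
```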

Does this mean that it is never appropriate to retain items on an exam that are passed by all examinees, or by none of the examinees? Not at all. There are many reasons to include at least some such items. Very easy items can reflect the fact that some relatively straightforward concepts were taught well and mastered by all students. Similarly, an instructor may choose to include some very difficult items on an exam to challenge even the best-prepared students. The instructor should simply be aware that neither of these types of items functions well to make discriminations among those taking the test.

 [material omitted...]

Conclusion

To those concerned about the prospect of extra work involved in item analysis, take heart: item difficulty and discrimination analysis programs are often included in the software used to process exams answered on Scantron or other optically scannable forms. As such, these analyses can often be performed for you by personnel in your computer services office. You might also consider enlisting your departmental student assistants to help with item distractor analysis, thus providing them with an excellent learning experience. In any case, an item analysis can certainly help you determine whether the items on your exams were good ones and which items to retain, revise, or replace.


References:

Aiken, L.R. (1997). Psychological testing and assessment (9th ed.). Boston, MA: Allyn and Bacon.

Cohen, R.J., Swerdlik, M.E., & Smith, D.K. (1992). Psychological testing and assessment: An introduction to tests and measurement (2nd ed.). Mountain View, CA: Mayfield Publishing Company.