Abstract Title
The effect of implementation of structure guidelines on the quality of MCQs

Authors

Bahaeldin Hassan
Abdulaziz Alamri
Tauheed Ahmad
Omer Elfaki
Kareem Saleh
Muhammad Abid

Theme

Best Practice in Faculty Development, and in Medical Education Research

Institution

King Khalid University - College of Medicine

Summary of Work

 

Objective: The aim of this study was to assess the effect of multiple-choice question (MCQ) structure guidelines and faculty training on the quality of MCQ tests in medical schools.

Background: Single-best-answer multiple-choice questions (MCQs) are among the most common assessment tools used in medical schools.

Methodology: In our institution (College of Medicine, King Khalid University), faculty development workshops on writing high-quality MCQ items have been conducted at a rate of two workshops per semester since the 2013 academic year. Item-writing guidelines were established, and a pre-test checklist was adopted for every test in the college. Item analysis parameters were used to assess the quality of the MCQ tests. A t-test was used to compare item analysis parameters before and after implementation of the item structure guidelines, with P < 0.05 considered significant.
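The pre/post comparison described in the methodology can be sketched as follows. This is an illustrative sketch only: the difficulty-index values below are invented for the example (they are not the study's data), and an independent-samples t-test from SciPy is assumed as the comparison method.

```python
# Illustrative sketch: compare per-item difficulty indices before and after
# guideline implementation with an independent-samples t-test at alpha = 0.05.
# The data values are hypothetical, not taken from the study.
from scipy import stats

before = [0.52, 0.48, 0.61, 0.55, 0.40, 0.67, 0.50, 0.59]
after = [0.75, 0.70, 0.81, 0.68, 0.72, 0.79, 0.74, 0.77]

t_stat, p_value = stats.ttest_ind(before, after)
if p_value < 0.05:
    print(f"Significant change (t = {t_stat:.2f}, p = {p_value:.4f})")
else:
    print(f"No significant change (t = {t_stat:.2f}, p = {p_value:.4f})")
```

The same comparison would be repeated for each course and each item analysis parameter (difficulty index, point biserial).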

Result and discussion:

Of the five clinical courses studied, only one (surgery) showed a significant difference in its difficulty index (P = 0.0001). Three courses (community medicine, pediatrics, and internal medicine) improved significantly in their discrimination ability (P < 0.05). Four of the five clinical courses showed improvement in the functioning of their distractors. The reliability of all the clinical courses studied increased after implementation of the MCQ structure guidelines.

Conclusion:

Faculty training in MCQ-writing skills and the adoption of item structure guidelines lead to improvement in the quality of MCQs. Faculty training programs and the implementation of MCQ guidelines are essential tools for good assessment in medical education.

 

Background

Multiple-choice questions (MCQs) are a very common and well-accepted method of evaluating diverse aspects of professional competency in medical education (1). Constructing high-quality MCQ items is a challenging task for faculty members, especially those who have never undergone precise, dedicated training (2).

Deviations from MCQ item-writing guidelines generally result in undesirable changes in the items' statistical parameters, such as the discrimination index (DI), difficulty index (P-value), the validity of the examination, and students' scores (3, 4).

Summary of Results

 

Table 1: Comparison of difficulty index and point biserial before and after implementation of the guidelines

| Course             | No. of items | Difficulty index, before: mean (SD) | Difficulty index, after: mean (SD) | P-value | Point biserial, before: mean (SD) | Point biserial, after: mean (SD) | P-value |
|--------------------|--------------|-------------------------------------|------------------------------------|---------|-----------------------------------|----------------------------------|---------|
| Surgery            | 60           | 0.52 (0.23)                         | 0.75 (0.23)                        | 0.0001  | 0.134 (0.16)                      | 0.19 (0.18)                      | 0.074   |
| Community medicine | 18           | 0.75 (0.31)                         | 0.87 (0.28)                        | 0.209   | 0.08 (0.10)                       | 0.23 (0.25)                      | 0.024   |
| Gynecology 2       | 60           | 0.80 (0.24)                         | 0.83 (0.24)                        | 0.49    | 0.16 (0.18)                       | 0.13 (0.19)                      | 0.37    |
| Pediatrics 1       | 60           | 0.75 (0.25)                         | 0.72 (0.27)                        | 0.52    | 0.13 (0.23)                       | 0.20 (0.28)                      | 0.0003  |
| Medicine 1         | 60           | 0.52 (0.26)                         | 0.53 (0.27)                        | 0.83    | 0.24 (0.21)                       | 0.42 (0.28)                      | 0.0001  |

Table 2: Function of distractors before and after training and guidelines implementation (distractor index, %; NFD = non-functioning distractors)

| Course             | No. of items | Before: 3 NFD | 2 NFD | 1 NFD | 0 NFD | After: 3 NFD | 2 NFD | 1 NFD | 0 NFD | P-value |
|--------------------|--------------|---------------|-------|-------|-------|--------------|-------|-------|-------|---------|
| Surgery            | 60           | 23%           | 43%   | 22%   | 12%   | 18%          | 20%   | 35%   | 27%   | 0.825   |
| Community medicine | 18           | 44%           | 17%   | 22%   | 17%   | 28%          | 40%   | 24%   | 8%    | 0.81    |
| Gynecology 2       | 60           | 43%           | 28%   | 22%   | 7%    | 35%          | 33%   | 23%   | 9%    | 0.80    |
| Pediatrics 1       | 60           | 31%           | 37%   | 20%   | 12%   | 40%          | 27%   | 20%   | 13%   | 0.82    |
| Medicine 1         | 60           | 10%           | 18%   | 33%   | 33%   | 5%           | 18%   | 40%   | 37%   | 0.85    |


Regarding the difficulty index, we classified items with an index of 0.3 or less as very difficult, 0.3-0.8 as average, and 0.8 or more as easy. Of the five courses studied, only one (surgery) showed a significant difference (P = 0.0001).

The point biserial reflects the ability of a test item to discriminate between high- and low-scoring students; values above 0.2 are desirable. In our study, three courses (community medicine, pediatrics, and internal medicine) improved significantly, as measured by the number of questions with good discrimination ability (P < 0.05); see Table 1.
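For readers unfamiliar with these parameters, both indices can be computed from a binary student-by-item response matrix roughly as follows. This is an illustrative sketch, not the item analysis software used in the study; the corrected point biserial shown here correlates each item with the total score excluding that item, one common convention.

```python
import numpy as np

def item_analysis(responses):
    """Difficulty index and corrected point biserial for each item.

    responses: 2-D array (students x items), 1 = correct, 0 = incorrect.
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)        # each student's total score
    difficulty = responses.mean(axis=0)  # proportion correct per item
    pb = []
    for j in range(responses.shape[1]):
        # Correlate the item with the total score excluding the item itself,
        # so an item is not correlated with its own contribution.
        rest = total - responses[:, j]
        pb.append(np.corrcoef(responses[:, j], rest)[0, 1])
    return difficulty, np.array(pb)
```

For example, `item_analysis([[1, 1, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]])` gives difficulty indices of 0.75, 0.75, and 0.5 for the three items.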

Four of the five clinical courses showed improvement in the functioning of their distractors: the percentage of test items in which all distractors were functioning increased from 12% to 27%, 7% to 9%, 12% to 13%, and 33% to 37% in the surgery, gynecology, pediatrics, and internal medicine courses, respectively (Table 2).

Test reliability was measured by KR-20 (Kuder-Richardson Formula 20), which ranges from 0 to 1; a value above 0.75 classifies a test as having good reliability. The reliability of all the clinical courses included in our study increased after implementation of the MCQ structure guidelines (see Figure 3).
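KR-20 itself is straightforward to compute from the same binary response matrix. The following is a minimal sketch, not the study's actual software; it uses population variances throughout, one common convention.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson Formula 20 for dichotomously scored items.

    responses: 2-D array (students x items), 1 = correct, 0 = incorrect.
    Returns an internal-consistency reliability estimate for the test.
    """
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]                   # number of items
    p = responses.mean(axis=0)               # proportion correct per item
    item_var = (p * (1.0 - p)).sum()         # sum of item variances p*q
    total_var = responses.sum(axis=1).var()  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)
```

By the threshold stated above, a KR-20 value above 0.75 would indicate good test reliability.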

Overall, the results revealed an improvement in the quality of MCQ tests after faculty training in test item construction and the introduction of item construction guidelines, as evidenced by the improved discrimination power and distractor functioning. These results are supported by a large project in our region, which concluded that well-constructed longitudinal faculty development workshops help improve MCQ item-writing skills in terms of discrimination and difficulty indices across Bloom's taxonomy cognitive levels, reduced item-writing flaws, and increased functioning distractors (5).

As in our study, the importance of faculty training in test item construction and item construction guidelines was reported by Josefowicz and colleagues (6).

Conclusion

Faculty training in MCQ-writing skills and the adoption of structure guidelines lead to improvement in the quality of MCQs. Faculty training programs and the implementation of item structure guidelines are essential tools for good assessment in medical education.

References

1. Baig M, Ali SK, Ali S, Huda N. Evaluation of multiple choice and short essay question items in basic medical sciences. Pak J Med Sci 2014; 30: 3-6.

2. Abdulghani HM, Ahmad F, Irshad M, Khalil MS, Al-Shaikh GK. Faculty development programs improve the quality of multiple choice questions items' writing. Sci Rep 2015; 5: 9556.

3. Ali SH, Ruit KG. The impact of item flaws, testing at low cognitive level, and low distractor functioning on multiple-choice question quality. Perspect Med Educ 2015; 4: 244-251.

4. Tarrant M, Ware J. Impact of item-writing flaws in multiple choice questions on student achievement in high-stakes nursing assessments. Med Educ 2008; 42: 198-206.

5. Abdulghani HM, Irshad M, Haque S, Ahmad T, Sattar K, Khalil MS. Effectiveness of longitudinal faculty development programs on MCQs items writing skills: A follow-up study. PLoS ONE 2017; 12(10): e0185895.

6. Josefowicz RF, Koeppen BM, Case SM, Galbraith R, Swanson DB, Glew H. The quality of in-house medical school examinations. Acad Med 2002; 77(2): 156-161.


Figure 3: Reliability comparison before and after implementation of the guidelines
