
January-A 2021, Volume 71, Issue 1

Short Communication

Evaluating the effectiveness of an MCQ development workshop using a cognitive model framework: A pre-post study

Rahila Ali  ( Department for Educational Development, Aga Khan University Hospital, Karachi, Pakistan )
Amber Shamim Sultan  ( Aga Khan University Hospital, Karachi, Pakistan )
Nida Zahid  ( Aga Khan University Hospital, Karachi, Pakistan )

Abstract

A workshop on MCQ development using a cognitive model framework was conducted for health educators from Aga Khan University (AKU) and other academic institutions. The aim was to develop the skill of preparing MCQs that assess higher cognitive levels. A pre-post study was conducted in which participant satisfaction was evaluated and pre- and post-test scores were used to assess the learning of the workshop participants. Of the 19 who attended the workshop, 16 participated in the pre- and post-tests and were included in the study through convenience sampling. The total duration of the study was six months. There was a significant difference in the overall pre-post test scores of the participants, with a mean difference of -4.176 ± 4.83 (p-value < 0.05). A significant difference was observed in the mean pre-post test scores of junior faculty (-6.350 ± 4.583; p-value = 0.02). The mean pre-test scores of junior faculty (4.950 ± 2.83) were significantly lower than those of the senior faculty (10.417 ± 1.56) (p-value = 0.001).

Active participation in faculty development workshops may enhance the skills needed to prepare one-best MCQs based on international guidelines.

Keywords: multiple choice questions, faculty development

DOI: https://doi.org/10.47391/JPMA.1068

 

Introduction

 

Assessment in health professionals' education relies heavily on multiple choice questions (MCQs). Peer review and review by educational experts ensure the quality of one-best MCQs. However, during MCQ review it was observed that most of the questions developed by the faculty assess knowledge at the recall level and fail to follow the guidelines laid down by the National Board of Medical Examiners (NBME).1

MCQs are considered an appropriate tool not only to measure a candidate's ability to recall and comprehend but also, when well designed, to measure higher-order thinking skills such as application and analysis.2 Well-constructed MCQs test reasoning at higher levels and can therefore discriminate between high and low achievers.3,4 Constructing MCQs that assess higher levels of comprehension and analysis is a challenging task which requires both sufficient knowledge and training.

A multiple choice question comprises a stem, which may be a clinical scenario, followed by a number of options related to the stem; only one option is correct (one-correct type MCQ) or the best (one-best type MCQ). To be awarded marks, the student must identify the most appropriate option.5

Flawed MCQs lead to errors in the interpretation of test scores and may undesirably affect the pass rate of students. To ensure the reliability and validity of tests, items must be constructed free of errors.6 Flawed items have an overall impact on student performance, and the structural quality of MCQs improves with the item writer's level of experience.6

Most errors are attributed to the structure of MCQs, followed by inaccuracies in the stem and vignette.7 Items developed by faculty members who have not undergone faculty development training are reported to lack quality, which is evidence of the value of faculty development in generating quality items.8

It has been observed that questions used in examinations held at the institutional level do not follow international guidelines for developing one-best MCQs; this may be significantly improved by conducting formal workshops on MCQ development. Internationally accepted guidelines and a multidisciplinary review improve the quality of items.9

The objectives of this study were to explore the perceptions of workshop participants regarding the achievement of workshop objectives, the content covered, and facilitation skills, and to determine whether participation in the workshop helped them gain new knowledge and skills.

 

Methods

 

It was a pre-post study conducted at the Aga Khan University (AKU) in which faculty from AKU (Medical College, School of Nursing and Allied Health), as well as from other academic institutions offering undergraduate and postgraduate health science education, participated in a workshop on constructing effective one-best MCQs. Both senior (assistant professor and above) and junior faculty (below assistant professor) attended the workshop. This evaluation is based on Kirkpatrick's model to analyse and evaluate the impact of educational training at levels 1 and 2.10

Gagne's nine events of instruction were used to design the workshop: gain attention, inform learners of the objectives, stimulate recall of prior learning, present the learning material, provide learner guidance, elicit performance (individually or in a group), provide feedback, assess performance, and promote retention and transfer.11

The instructional approach of the workshop was based on flipped-classroom methodology. Prior to the face-to-face session, the chapter on one-best MCQ construction from the NBME guidelines was shared with all registered participants. Questions based on these pre-workshop readings, designed to stimulate prior knowledge and check basic understanding of the concepts, were posted on an online discussion forum created on Padlet. The purpose of this forum was to assess the participants' knowledge of the basic rules for constructing one-best MCQs and to engage them outside the workshop.

To determine whether the participants acquired new knowledge, a pre-test was held just before the three-hour workshop began and a post-test was held on its completion. In the pre- and post-test, participants were asked to categorise MCQs according to the Bloom's level of learning being assessed and to identify flaws based on the general rules for one-best answer items given in the NBME guidelines.

The total number of participants in the workshop was 19; convenience sampling was used and data from only 16 participants were included in the quantitative analysis. All candidates who took both the pre- and post-test and attended the entire workshop were included in the study; participants who did not attend the entire workshop or did not complete both tests were excluded. Data to measure the acquisition of new knowledge were collected using a pre/post-test questionnaire, and the perceptions of the workshop participants regarding the achievement of workshop objectives, content covered, and facilitation were gathered through the workshop evaluation form.

Ethical approval was sought from the Aga Khan Ethical Review Committee. Anonymity and confidentiality of the data were maintained.

Data were analysed using SPSS version 19; descriptive analysis was applied to compute mean ± standard deviation and percentages. Pre- and post-test scores were compared using independent-samples and paired-samples t-tests.
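As an illustration of this analysis, the following is a minimal sketch in Python using SciPy rather than the SPSS procedures the authors used; the score arrays are hypothetical placeholders, not the study data.

from scipy import stats

# Hypothetical placeholder scores for 16 participants (not the study data).
pre = [5, 7, 4, 10, 11, 6, 9, 12, 3, 8, 10, 5, 7, 11, 9, 6]
post = [11, 12, 9, 12, 13, 10, 14, 13, 8, 12, 14, 10, 11, 13, 12, 11]

# Paired-samples t-test: within-participant change from pre- to post-test.
# A negative mean difference (pre minus post) indicates improvement.
t_paired, p_paired = stats.ttest_rel(pre, post)

# Independent-samples t-test: e.g. junior vs senior faculty pre-test scores.
junior_pre = [5, 4, 3, 6, 7]
senior_pre = [10, 11, 9, 12, 10]
t_ind, p_ind = stats.ttest_ind(junior_pre, senior_pre)

print(f"paired: t = {t_paired:.3f}, p = {p_paired:.4f}")
print(f"independent: t = {t_ind:.3f}, p = {p_ind:.4f}")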

 

Results

 

Both junior (5; 45.5%) and senior faculty (9; 27.3%) participated in the workshop; information for 5 participants was missing. From the Medical College, 5 (27.2%) of the participants were from basic sciences and 6 (31.8%) from clinical sciences, while 4 (22.7%) were nurses and 2 (9%) were from the department of medical education; information for 5 participants was again missing.

Figure 1a shows the participants' rating of the workshop objectives and content: about 13 (72%) rated them as excellent, 4 (25%) as good, and 2 (3%) as satisfactory. Figure 1b depicts the participants' rating of the knowledge gained and their understanding of the workshop content: about 10 (54%) rated it as excellent, 8 (45%) as good, and 1 (1%) as satisfactory. Similarly, Figure 1c indicates the participants' rating of the presentations at the workshop: about 14 (74%) rated them as excellent, 4 (24%) as good, and 1 (2%) as satisfactory.

We observed a significant difference in the overall pre-post test scores of the participants, with a mean difference of -4.176 ± 4.83 (p-value < 0.05). A significant difference was observed in the mean pre-post test scores of junior faculty (-6.35 ± 4.58; p-value = 0.02). For senior faculty, no significant difference was noted in the mean pre-post test scores (p-value = 0.66).

The mean pre-test scores of junior faculty (4.95 ± 2.83) were significantly lower than those of senior faculty (10.417 ± 1.56) (p-value = 0.001). However, there was no significant difference in the post-test scores of junior (11.30 ± 5.30) versus senior faculty (11.08 ± 3.78) (p-value > 0.05).

 

Conclusion

 

The faculty development workshop improved one-best MCQ writing skills based on NBME guidelines. A significant difference was observed in the mean pre-post test scores of junior faculty, suggesting that such workshops are particularly valuable for improving the MCQ writing skills of less experienced faculty members.

 

Disclaimer: None.

Conflict of interest: None.

Funding Disclosure: None.

 

References

 

1.      Paniagua M, Swygert K. Constructing written test questions for the basic and clinical sciences. Philadelphia, PA: National Board of Medical Examiners, 2016.

2.      Khadjooi K, Rostami K, Ishaq S. How to use Gagne's model of instructional design in teaching psychomotor skills. Gastroenterol Hepatol Bed Bench. 2011; 4:116-9.

3.      Downing SM. Assessment of knowledge with written test forms. In: Downing SM, ed. International handbook of research in medical education. USA: Springer, 2002; pp 647-72.

4.      Schuwirth LW, Van Der Vleuten CP. Different written assessment methods: what can be said about their strengths and weaknesses? Med Educ. 2004; 38:974-9.

5.      Palmer E, Devitt P. Constructing multiple choice questions as a method for learning. Ann Acad Med Singap. 2006; 35:604-8.

6.      Collins J. Writing multiple-choice questions for continuing medical education activities and self-assessment modules. Radiographics. 2006; 26:543-51.

7.      Danish KF. Role of effective feedback in Multiple Choice Questions (MCQs) designing for faculty development. [Online] 2010 [Cited 2020 October 26]. Available from: URL: http://www.journalrmc.com/index.php/JRMC/article/view/665

8.      Naeem N, van der Vleuten C, Alfaris EA. Faculty development on item writing substantially improves item quality. Adv Health Sci Educ Theory Pract. 2012; 17:369-76.

9.      Jozefowicz RF, Koeppen BM, Case S, Galbraith R, Swanson D, Glew RH. The quality of in-house medical school examinations. Acad Med. 2002; 77:156-61.

10.    Praslova L. Adaptation of Kirkpatrick's four level model of training criteria to assessment of learning outcomes and program evaluation in higher education. Educ Assess Eval Account. 2010; 22:215-25.

11.    Driscoll MP. Psychology of learning for instruction. 3rd ed. Harlow: Pearson Education, 2013; pp 469.

 
