October 2021, Volume 71, Issue 10

Short Communication

Evaluating the impact of faculty development programme initiative: Are we really improving skills in MCQ writing?

Faiza Kiran  ( Army Medical College, National University of Medical Sciences, Rawalpindi, Pakistan. )
Rukhsana Ayub  ( National University of Medical Sciences, Rawalpindi, Pakistan. )
Ayesha Rauf  ( National University of Medical Sciences, Rawalpindi, Pakistan. )
Khadija Qamar  ( Army Medical College, National University of Medical Sciences, Rawalpindi, Pakistan. )

Abstract

A series of seven workshops was conducted in 2018 at the National University of Medical Sciences and its affiliated institutes to evaluate the effectiveness of a three-hour workshop in improving faculty competence in developing high-quality test items. Participants' satisfaction was evaluated with a post-workshop feedback questionnaire. A self-developed structured questionnaire was administered as a pre-test and post-test assessment. A paired t-test was applied to evaluate the difference in mean scores of responses. A total of 141 faculty members were trained. The training session produced high satisfaction with all elements of the workshop and significant improvements in confidence in item-writing skills (p=0.001), recognition of the parts of MCQs (p=0.001), identification of item-writing flaws (p=0.001), and knowledge of the levels of Miller's pyramid and Bloom's taxonomy (p=0.001). Training sessions of short duration are effective in improving faculty competence in writing high-quality test items, provided hands-on experience is built in and effective feedback is provided.

Keywords: Faculty development programme, satisfaction, feedback, knowledge, written, educational assessment.

DOI: https://doi.org/10.47391/JPMA.1207

 

Introduction

 

For the faculty of medical colleges to experience self-transformation and avoid obsolescence, it is imperative to run training programmes for continuing medical education. Such programmes increase the quality of learning and assessment, thereby benefitting the large workforce of nascent physicians trained by the faculty.1 The challenge lies in going beyond attendance and satisfaction to understanding how a well-organised training session can modify the cognition and behaviour of the faculty, which in turn will affect the future physician workforce.2 Faculty developers, by conducting immediate programme assessment, open the doors to reflection, self-evaluation and, thereby, improvement.3 These steps are vital for the faculty to steer clear of professional isolation or attrition.4

With the move towards Competency Based Medical Education (CBME), institutions are spending time, money, and resources on faculty training at an accelerating pace.5 Though the literature is replete with studies of faculty development initiatives, there is little evidence of the impact of these programmes on the faculty, which should be evident in observable changes in their behaviour and practices.6 The focus of these studies is mainly on immediate satisfaction and improvement in knowledge, with no reporting on gains in participants' skills. These training programmes often lack proper structure, occur mostly in private medical colleges, and do not practice programme evaluation.7 Additionally, while planning coaching sessions, most trainers foreground skill transfer or enhancement, often ignoring participants' motivations and values.3

The rate-limiting step in the evolution towards CBME is a model programme for faculty development that incorporates hands-on experience and prompt feedback to improve skills.8,9 The most popular model for evaluating the effectiveness of faculty training programmes is Kirkpatrick's model, which delineates four levels of training outcomes, namely, reaction, learning, behavioural change, and results via organisational performance.10

In CBME, learning and assessment must be congruent.5 This steers institutions toward incorporating quality assurance into the assessment process to provide validity evidence.11 A study in a public-sector college in Pakistan, using MCQs as an assessment tool, revealed that our faculty is not trained to write high-quality MCQs; their item-writing skills were poor, as many flaws were identified when the items were reviewed.12 In addition, test development is costly, requiring considerable time, energy, and effort in item authoring, item moderation, test administration, and post hoc analysis. Construction of a good multiple-choice item has been estimated to cost approximately $1,000.13 It follows that in faculty development programmes, greater emphasis must be placed on improving the quality of item writing (MCQs), which provides content validity evidence.11 Another study showed that effective feedback given to MCQ authors improved the quality of test items and reduced item-writing flaws.8

Faculty training in assessment must be a longitudinal programme and not a random, one-time activity, as effective assessment is not an innate skill; rather, it requires ongoing training, practice, and feedback.14 Most faculty members acknowledge the importance of training, but training sessions should be designed with their feasibility for the faculty in mind. Part-time sessions, rather than full-day ones, are more acceptable to them.15

There are many medical and dental colleges operating under the umbrella of our institution, where MCQs are reviewed regularly for item-bank generation and for exams. It was observed that the questions submitted by most faculty members were below par, even though training sessions were conducted frequently within these institutions. Faculty members were also doing the tedious job of evaluating their MCQs themselves. As mentioned earlier, the majority of training models are ineffective, as they do not go beyond Kirkpatrick level 1 and lack the 'success factors' for faculty development, namely, incorporation of active learning and feedback, effective relationships with colleagues, and diverse teaching approaches.7,16

This informal and unplanned needs assessment drove us to plan a comprehensive faculty development programme consisting of a continuous series of three-hour training sessions on writing high-quality MCQs. The objectives were to evaluate faculty competence in developing high-quality test items, to report on their degree of satisfaction, and to explore the immediate impact on the cognitive and affective domains of the participants.

 

Methods and Results

 

After obtaining permission from the institutional ethics committee, a descriptive, cross-sectional study was conducted at the NUMS constituent and affiliated medical and dental colleges in 2018, over a five-month period. The learning effect of the workshops was evaluated on the same cohort, comprising clinicians, bench scientists, consultants, and supervisors of postgraduate trainees. Participants' satisfaction was evaluated with a post-workshop feedback questionnaire.

Initially, we provided relevant feedback on MCQs written before the workshop. A pre-test was conducted, followed by a group activity on constructing MCQs. This hands-on activity was, in turn, reinforced by immediate feedback on each constructed MCQ. A post-test was taken at the end of the session.

The pre- and post-test questionnaires were structured and consisted of 13 items. The first part (Q1-9) evaluated the current perceptions of the faculty, the SOPs, and current practices within their institutions regarding the quality assurance of test items; the questions reflected quality assurance procedures and the attitudes of participants and their institutions towards assessment. The second part (Q10-11) assessed key concepts about the basic anatomy of MCQs (each part scored 1 mark; total 5 marks), Bloom's taxonomy levels (1 mark for each level; total 6 marks), and Miller's competency pyramid (1 mark for each level; total 4 marks). The third part (Q13) measured participants' knowledge and skills regarding item-writing flaws (1 mark for each flaw; total 3 marks). The participants were scored out of a total of 18 marks on Q10-Q13.
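For clarity, the rubric arithmetic (5 + 6 + 4 + 3 = 18 marks) can be expressed as a short tally. The sketch below, in Python, is purely illustrative; the section names and example data are assumptions for exposition, not part of the study instrument.

    # Minimal sketch of the 18-mark tally for Q10-Q13.
    # Section names and data are illustrative assumptions, not the study instrument.
    RUBRIC = {
        "mcq_parts": 5,       # Q10-11: five parts of an MCQ, 1 mark each
        "blooms_levels": 6,   # six levels of Bloom's taxonomy, 1 mark each
        "millers_levels": 4,  # four levels of Miller's pyramid, 1 mark each
        "item_flaws": 3,      # Q13: three item-writing flaws, 1 mark each
    }

    def total_score(correct):
        """Sum a participant's marks, capping each section at its maximum."""
        return sum(min(correct.get(k, 0), cap) for k, cap in RUBRIC.items())

    # Example: 5 MCQ parts + 4 Bloom's levels + 2 Miller's levels + 1 flaw = 12/18.
    print(total_score({"mcq_parts": 5, "blooms_levels": 4,
                       "millers_levels": 2, "item_flaws": 1}))  # -> 12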

Data were analysed using SPSS 23. A paired t-test was applied to compare the mean pre-test and post-test scores of the same cohort.
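For readers who wish to reproduce this analysis without SPSS, the equivalent paired t-test takes a few lines of Python using scipy.stats.ttest_rel. The scores below are made up purely for illustration and are not the study data.

    # Minimal sketch of the paired t-test comparing pre- and post-test scores
    # of the same cohort. Scores are hypothetical, NOT the study data;
    # the study itself used SPSS 23.
    from scipy import stats

    pre = [6, 8, 5, 9, 7, 6, 10, 8]         # hypothetical pre-test scores (out of 18)
    post = [12, 14, 9, 15, 13, 11, 16, 14]  # hypothetical post-test scores, paired by participant

    t_stat, p_value = stats.ttest_rel(post, pre)  # paired test: same participants measured twice
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")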

A total of 141 faculty members attended the workshop in seven cohorts and completed the pre-test; 119 participants completed the satisfaction form, and 123 completed the post-test.

The participants acknowledged that the workshop promoted collaborative learning and declared that the feedback helped them learn. They hoped that the knowledge and skills gained would help them improve their future practice of making MCQs, and they suggested that the workshop be offered to all faculty members. Their average ratings of hands-on learning, collaborative tasks, key concepts of assessment, quality and relevance of the PowerPoint presentation, and intention to change future practice were favourable, averaging 1-2 on a 5-point scale (Table-1).

Significant (p=0.001) positive changes were observed in the faculty members' knowledge of the five basic parts of an MCQ and the levels of Miller's pyramid and Bloom's taxonomy, in their confidence in making high-quality MCQs, and in their ability to identify item-writing flaws. Before the session, the participants' knowledge test scores were low: 55 (39%) felt confident in making MCQs, 13 (9%) knew the parts of an MCQ, 3 (2%) identified all flaws in an MCQ, and 7 (5%) knew about Miller's pyramid and Bloom's taxonomy. The post-test showed a significant rise in the number of participants who felt confident, i.e. 80 (65%); 73 (59%) identified all parts of an MCQ, 20 (16%) identified all flaws in the given MCQ, and 54 (44%) labelled the levels of Miller's pyramid and Bloom's taxonomy correctly (Table-2).

The important conclusions drawn from the study are shown in Table-3.

 

Discussion

 

A three-hour workshop on high-quality MCQ construction was evaluated, and its impact on the knowledge and behaviour of the faculty was studied. Our study is unique in that it showed a positive impact on the cognition and behaviour of the participants after a session of shorter duration, compared with other studies that conducted workshops lasting seven hours,17 two days18 and seven days.19 Our results showed a significant increase in the mean scores of participants' satisfaction, knowledge, and confidence.

The programme was evaluated at the first two levels of the Kirkpatrick framework. To evaluate reaction (level 1), the participants' perceptions were captured on a feedback questionnaire comprising nine questions on a Likert scale. Our findings were consistent with a systematic review of faculty development studies by Steinert et al,3 which stated that the workshops were generally rated as useful. The results were also similar to the study by AlFaris et al,17 in which the participants were highly satisfied with all elements of the workshop. This was quite heartening and motivating, as the satisfaction of participants is the first step in bringing about substantial change within an organisation.20

The second part of our study evaluated learning (level 2) by measuring the knowledge and behaviour of the participants. The initial pre-test survey confirmed that most faculty members (n=134) realised the importance of high-quality MCQs and were in favour of regular training (n=137), while fewer (n=49) approved of writing MCQs at the application level. Though 73 (52%) faculty members wrote slightly more than five MCQs per month, only 30 (21%) wrote items that assessed higher cognitive levels. Only 66 (47%) members declared that their items were checked for quality; of these, only 10 (7%) affirmed that their test items were reviewed by educationalists, 38 (27%) relied on senior guidance and peer evaluation to ensure quality, and 4 (3%) did this tedious job themselves (Table-2). Anecdotally, the lack of a proper assessment unit and the unavailability of full-time educationalists within these institutions might explain these practices.

Our study results compared favourably with studies by AlFaris,17 Abdulghani18 and Naeem,19 all of whom reported an evident improvement in participants' knowledge scores (learning) after the training intervention. This was congruent with other studies in which positive changes in teachers' knowledge, attitudes, and skills following participation in a faculty development activity were observed.21 The results showed a post-workshop increase in the participants' confidence in their MCQ-writing skills, suggesting that even a shorter, well-designed, rigorous training session can be successful. This was in harmony with the studies by Pandachuck et al22 and Dellinges and Curtis,23 in which the participants affirmed that brief training sessions were helpful in improving their skills. Our results also closely matched the study by Abdulghani et al,18 who conducted a two-day workshop in which the faculty analysed MCQ items for Bloom's cognitive levels and item-writing flaws. This reinforces the viewpoint of Notzer that hands-on training sessions with a learner-centred approach and prompt, effective feedback to participants lead to successful faculty development programmes.24

 

Limitations

This was the first in a series of studies that will follow the cohort to determine the impact of reinforcement and repeated feedback. Forthcoming studies will examine the impact of serial training opportunities on improving faculty members' skills in writing quality MCQs and the role of appropriate feedback.

 

Conclusions

 

Our study showed that well-designed, shorter faculty development sessions that include feedback as an essential element of training have a positive impact on the knowledge and behaviour of participants. It can be deduced that shorter sessions are less resource-intensive, more convenient, encourage faculty involvement, and improve faculty competence. They can be repeated more often to reinforce knowledge retention and skill refinement, and a large number of people can be trained in a short span of time.

 

Disclaimer: None.

Conflict of Interest: None.

Funding Disclosure: None.

 

References

 

1.      Camblin LD, Steger JA. Rethinking faculty development. High Educ 2000; 39: 1-8.

2.      Fink LD. Innovative ways of assessing faculty development. New Dir Teach Learn 2013; 133: 47-59.

3.      Steinert Y, Mann K, Anderson B, Barnett BM, Centeno A, Naismith L, et al. A systematic review of faculty development initiatives designed to enhance teaching effectiveness: A 10-year update: BEME Guide No. 40. Med Teach 2016; 38: 769-86.

4.      O’Keefe M, Lecouteur A, Miller J, McGowan U. The Colleague Development Program: a multi-disciplinary programme of peer observation partnerships. Med Teach 2009; 31: 1060–5.

5.      Chacko TV. Moving towards competency-based education. Challenges and the way forward. Arch Med Health Sci 2014; 2: 247-25.

6.      Lee SS, Dong C, Yeo SP, Gwee MC, Samarasekera DD. Impact of faculty development programmes for positive behavioural changes among teachers: a case study. Korean J Med Educ 2018; 30: 11-22.

7.      Nadeem N, Yasmin R. Faculty Development Practices in Medical Colleges of Lahore, Pakistan. Pak J Med Health Sci 2018; 12: 66-72.

8.      Danish KF, Khan RA. Role of effective feedback in Multiple Choice Questions (MCQs) designing for faculty development. J Rawalpindi Med Coll 2011; 15: 98-100.

9.      Little M. Preparing nursing students to be health educators: personal knowing through performance and feedback workshops. J Nurs Educ 2006; 45: 131-5.

10.    Bates R. A critical analysis of evaluation practice: the Kirkpatrick model and the principle of beneficence. Eval Program Plann 2004; 27: 341-7.

11.    Downing S, Yudkowsky R. Assessment in health professions education. London: Routledge; 2009.

12.    Iqbal MZ, Irum S, Yousaf MS. Multiple choice questions; Developed by the faculty of a public sector medical college. Prof Med J 2017; 24: 1409-14.

13.    Fitzgerald C. Risk management: Calculating the bottom line of developing a certification or licensure exam. [Online] 2005 [Cited 2010 June 13]. Available from: URL: http://www.caveon.com/articles/fitzgerald3.htm.

14.    Holmboe ES, Ward DS, Reznick RK, Katsufrakis PJ, Leslie KM, Patel VL, et al. Faculty development in assessment: the missing link in competency-based medical education. Acad Med 2011; 86: 460-7.

15.    Diaz V, Garrett PB, Kinley ER, Moore JF, Schwartz CM, Kohrman P, et al. Faculty development for the 21st century. EDUCAUSE Rev. 2009; 44: 46-55.

16.    Amin Z, Burdick WP, Supe A, Singh T. Relevance of the Flexner Report to contemporary medical education in South Asia. Acad Med 2010; 85: 333-9.

17.    AlFaris E, Naeem N, Irfan F, Qureshi R, Saad H, Abdulghani HM, et al. A one-day dental faculty workshop in writing multiple-choice questions: an impact evaluation. J Dent Educ 2015; 79: 1305-13.

18.    Abdulghani HM, Ahmad F, Irshad M, Khalil MS, Al-Shaikh GK, Syed S, et al. Faculty development programs improve the quality of Multiple-Choice Questions items' writing. Sci Rep 2015; 5: 1-7.

19.    Naeem N, Van der Vleuten CPM, AlFaris EA. Faculty development on item writing substantially improves item quality. Adv Health Sci Educ Theory Pract 2012; 17: 369-76.

20.    Belfield C, Thomas H, Bullock A. Measuring effectiveness for best evidence medical education: a discussion. Med Teach 2001; 23: 164-70.

21.    Markert RJ, O'Neill SC, Bhatia SC. Using a quasi-experimental research design to assess knowledge in continuing medical education programmes. J Contin Educ Health Prof 2003; 23: 157-61.

22.    Pandachuck K, Harley D, Cook D. Effectiveness of a brief workshop designed to improve teaching performance at the University of Alberta. Acad Med 2004; 79: 798-804.

23.    Dellinges MA, Curtis DA. Will a short training session improve multiple-choice item-writing quality by dental school faculty? A pilot study. J Dent Educ 2017; 81: 948-55.

24.    Notzer N, Abramovitz R. Can brief workshops improve clinical instruction? Med Educ 2008; 42: 152-6.

 
