Qamar Riaz (Department for Educational Development, Aga Khan University, Karachi, Pakistan)
Naveed Yousuf (Department for Educational Development, Aga Khan University, Karachi, Pakistan)
Muhammad Shahzad Shamim (Department of Surgery, Aga Khan University, Karachi, Pakistan)
January 2023, Volume 73, Issue 0
Research Article
Abstract
Objective: To report the development and validity assessment of a comprehensive admission assessment, comprising a written test and multiple mini interviews, for selection into a medical internship programme.
Method: The psychometric validation study was conducted at the Aga Khan University, Karachi, and comprised anonymised scores of the written admission test, multiple mini interviews and exit written examination for all interns who completed their internship in 2018 and 2019. Correlations between the admission and exit tests, and their predictive value, were assessed. Data was analysed using SPSS 20.
Results: There were 160 interns; 80(50%) each in 2018 and 2019. Mean scores were 68.8±4.40% for the written test and 76.7±4.66% for the multiple mini interviews. The mean score for the exit examination was 68.1±6.84%. Cronbach's alpha values for the written admission test were 0.82 and 0.88 for the two years, respectively, while the corresponding values for the multiple mini interviews were 0.81 and 0.94. The written admission and exit tests were moderately correlated (0.44), while the correlations of multiple mini interview scores with the written admission and exit tests were -0.28 and 0.04, respectively.
Conclusion: The selection process should comprise multiple measures of assessment to ensure the selection of the best candidates.
Keywords: Multiple mini interviews, Predictive validity, Admission test.
(JPMA 73: 37; 2023) DOI: 10.47391/JPMA.5209
Submission completion date: 03-12-2021 — Acceptance date: 29-06-2022
Introduction
The process of selecting the best candidate for a job continues to be a challenge across all professions, and internship is no different. Medical intern is a term used in some countries to describe a fresh medical graduate who has completed medical school and holds a medical degree, but does not yet have a full licence to practise medicine unsupervised.1 In Pakistan, medical graduates have to complete a mandatory one-year internship training, popularly known as the House Job, in a hospital recognised by the Pakistan Medical Council (PMC)1 to obtain a full licence for practising medicine.1 This one-year training is not only a pathway to obtaining professional licensure, but also an opportunity to gain workplace experience characterised by both power and responsibility while learning the tricks of the trade, often culminating in a lifetime career.2 This makes internship a situation where the stakes are higher than admission to a medical school, thus mandating a selection process that is fair, valid, reliable and efficient.
Unfortunately, while selection methods for undergraduate medical schools have been widely studied, there is a paucity of literature, especially from Pakistan, examining the validity of selection methods for postgraduate (PG) programmes, especially the internship programme. In most teaching hospitals, selection into internship is primarily based on medical school examination scores; at some institutions, this may be coupled with written tests and/or interviews, each of which has its limitations.3,4
The internship programme at the Aga Khan University (AKU) in Karachi is highly sought after owing to its reputation for training facilities that match international standards and for structured programmes in a secure workplace.2 The AKU has developed a comprehensive admission process to identify candidates who are knowledgeable, possess the essential non-cognitive attributes expected of a medical graduate, such as interpersonal skills, integrity and professionalism, and can endure the hardships of internship training. This selection method has been in place for some years now, and considerable time and effort are invested in developing and administering these admission tests every year.
It is essential to ensure that the decisions based on these tests are valid, reliable, robust, defensible and transparent.5 It is also necessary that the scores based on these assessments predict the future performance of the candidates.4,6
The admission test for the internship at AKU is carried out in two stages: a written clinical problem-solving test, followed by a modified format of multiple mini interviews (MMIs).
The written test comprises 100 one-best-type multiple choice questions (MCQs) assessing clinical problem-solving ability, based on a table of specifications (ToS) developed around the competencies of a medical graduate and covering all undergraduate medical education (UGME) disciplines, including surgery, medicine, obstetrics and gynaecology, emergency medicine, and paediatrics. This test is conducted simultaneously at three locations across Pakistan and is usually taken by approximately 1,000 candidates. Those who obtain the minimum passing score of 55% in the written test qualify for the modified MMIs, the second stage of the admission test.
MMIs are examinations similar to the objective structured clinical examination (OSCE), with multiple stations set up to assess non-cognitive attributes.7 For the AKU internship admission test, MMI stations are based on tasks that the applicants would most likely encounter during the internship, including interpersonal facets of typical workplace performance. The ToS for the MMIs comprises essential clinical and procedural skills expected of a fresh medical graduate as well as attitudinal skills in line with competencies identified by the PMC.1 MMI stations (case scenarios) are developed according to the ToS by the internship committee members, who are clinicians with specific attributes. Each MMI station is then reviewed by a group of 4-6 faculty members from multiple disciplines, facilitated by an educational expert, and banked for administration in different years. A total of 10 MMI stations are administered each year, of which 3 stations test essential clinical skills and 5 assess applicants' soft skills, such as ethics, communication skills, teamwork, confidentiality, time management, etc. Each attribute is operationally defined before a short scenario is constructed for it. These scenarios serve as triggers and are followed by 2-3 questions or tasks. The candidate's performance on each station is assessed against 4-6 items testing the underlying attribute, using a 7-point rating scale ranging from 0 to 6. A 3-point global rating scale is also used to assess the holistic performance of the candidate.
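To make the scoring arithmetic concrete, the short Python sketch below shows how a single candidate's score on one station could be converted from item ratings on the 0-6 scale into a percentage. The item ratings, global rating and function name are hypothetical, used only for illustration, and are not taken from the actual scoring key.

```python
# Hypothetical illustration of station-level scoring: each station carries 4-6
# items rated on a 0-6 scale, plus a separate 3-point global rating.
def station_percentage(item_ratings, max_per_item=6):
    """Convert a list of 0-6 item ratings into a percentage score for one station."""
    return 100 * sum(item_ratings) / (max_per_item * len(item_ratings))

# Example: a 5-item communication-skills station (ratings are made up).
item_ratings = [5, 4, 6, 3, 5]   # one assessor's rating per item, each 0-6
global_rating = 2                # holistic rating on the 3-point scale (assumed 1-3)

print(f"Station score: {station_percentage(item_ratings):.1f}%")   # 76.7%
print(f"Global rating: {global_rating}/3")
```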
The MMIs for all shortlisted candidates are conducted on a single day, in 5 parallel circuits running simultaneously in a single large hall. Each station is assessed and scored by one examiner; thus, there are as many examiners as there are stations. Each station lasts 8 minutes, so each candidate is assessed for a total of 80 minutes during the MMI by 10 different examiners. The assessors on the MMI stations include clinical faculty members and senior residents from various departments, faculty from the nursing school, and representatives from the human resource department. The assessors are trained as examiners and most of them have prior experience with OSCEs or MMIs. The assessors are assigned stations based on their expertise and specialty, keeping in view the content or attribute being assessed at the station. For example, faculty from the nursing school are assigned the blood pressure measurement station, while faculty members from anaesthesia and critical care units (CCUs) are assigned the basic life support (BLS) station. All assessors are asked to report one hour before the MMI and are again briefed about the case scenario, format, task being assessed and the rating scales used at the station assigned to them. The 5 assessors assigned the same station in the five parallel circuits are briefed together to ensure standardisation and are given an opportunity for discussion to enhance mutual understanding and minimise bias during assessment and scoring. The candidates are briefed about the MMI before the start of the circuit. The scenario-based questions or clinical tasks are posted at each station, and the candidates are required to read the case and the task before beginning each station.
Unlike most internship programmes across the country, AKU also conducts an exit exam for interns. At the end of the one-year internship, the graduating interns are once again tested through a written test comprising 100 one-best-type MCQs to measure the gain in knowledge during internship training. The ToS for this exit test is the same as that of the entrance test.
The current study was planned to determine the validity of this selection method, and to see if the selection scores were predictive of the future performance of the selected candidates.
Materials and Methods
The psychometric validation study was conducted at the AKU, Karachi, and comprised anonymised scores of the written admission test, MMIs and exit written examination for all interns who completed their internship in 2018 and 2019. Non-probability, purposive sampling was used to select these specific cohorts because the revised scoring key and rating scales were introduced and implemented in 2018. Data was retrieved after obtaining approval from the institutional ethics review committee.
Data was analysed using SPSS 20. Descriptive statistics, including overall mean, standard deviation (SD), frequencies and percentages, were calculated for the scores of the written admission test, MMIs and exit examination. The scores obtained on each station were converted into percentage scores for analysis. Reliability was calculated using Cronbach's alpha, with all stations weighted equally. Pearson's correlation and linear regression were used to assess correlation and prediction, respectively. Paired t-test was used to assess the difference between the written admission and exit examinations. For all analyses, p<0.05 was taken as statistically significant.
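Although the study analyses were run in SPSS 20, the following Python sketch illustrates the same steps (Cronbach's alpha, Pearson's correlation, simple linear regression and the paired t-test) on simulated data. The variable names, score distributions and cohort size are assumptions made for illustration only and are not the study data.

```python
# Illustrative re-implementation of the analysis pipeline described above;
# SPSS 20 was used in the study, and all data below are simulated.
import numpy as np
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a candidates-by-items (or stations) score matrix."""
    k = items.shape[1]                              # number of items/stations
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of candidates' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical cohort: 80 interns, 10 MMI stations scored as percentages.
rng = np.random.default_rng(seed=0)
mmi_stations = pd.DataFrame(rng.normal(75, 10, size=(80, 10)),
                            columns=[f"station_{i + 1}" for i in range(10)])
scores = pd.DataFrame({
    "written_admission": rng.normal(69, 4.4, size=80),
    "mmi_total": mmi_stations.mean(axis=1),
    "exit_exam": rng.normal(68, 6.8, size=80),
})

# Reliability of the MMI (all stations weighted equally).
print(f"MMI Cronbach's alpha: {cronbach_alpha(mmi_stations):.2f}")

# Pearson's correlation between admission and exit scores.
r, p = stats.pearsonr(scores["written_admission"], scores["exit_exam"])
print(f"Written admission vs exit: r = {r:.2f}, p = {p:.3f}")

# Simple linear regression: does the admission score predict the exit score?
reg = stats.linregress(scores["written_admission"], scores["exit_exam"])
print(f"Regression slope = {reg.slope:.2f}, R^2 = {reg.rvalue ** 2:.2f}")

# Paired t-test for the difference between admission and exit examination scores.
t, p = stats.ttest_rel(scores["written_admission"], scores["exit_exam"])
print(f"Paired t-test: t = {t:.2f}, p = {p:.3f}")
```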
Results
There were 160 interns; 80(50%) each in 2018 and 2019. Mean scores were 68.8±4.40% for the written test and 76.7±4.66% for the MMIs. The mean score for the exit examination was 68.1±6.84%. Cronbach's alpha values for the written admission test were 0.82 and 0.88 for the two years, respectively, while the corresponding values for the MMIs were 0.81 and 0.94, and for the exit examination, 0.57 and 0.63 (Table-1).
The mean scores for each MMI station ranged from 68.25±9.83% to 80.66±13.85%. Mean scores of all the individual stations were noted for both 2018 and 2019 batches (Table-2).
The written admission and exit tests were moderately correlated (0.44), while the correlations of MMI scores with the written admission and exit tests were -0.28 and 0.04, respectively (Table-3).
There was no significant difference between the internship admissions written test and internship exit examination scores (p>0.05).
Discussion
Healthcare student selection processes have been the subject of much debate and investment over the years.6 While selection methods for undergraduate medical schools have been widely studied,7 there is a paucity of literature, especially from Pakistan, examining the validity of selection methods for PG programmes, specifically internships. To our knowledge, the current study is the first to report the use of a comprehensive admission test for entry into medical internship at a Pakistani institution.
Internship is not only a pathway to obtaining professional licensure, but also a highly demanding phase in the life of a medical graduate. It is the starting point of independent professional practice in healthcare. It is during this time that medical graduates apply what they have learned in medical school while continuing to learn real-life applications of scientific knowledge and taking responsibility for the continuity of care of their patients.1 It is therefore essential that the candidates selected for this training have the required competencies to undertake their responsibility as physicians and can endure the hardships of this training period.8 But, like any assessment, it is also important that the assessment process used to select applicants is valid, reliable, robust, defensible and fair.9
In Pakistan, medical graduates are traditionally recruited into the internship programme on the basis of scores of academic achievements, such as grade point average (GPA). The correlation between the preadmission scores and ratings of clinical supervisors has been shown to be low and often insignificant.10 However, these GPA scores can predict clinical performance or performance in licensing exams only if these examinations are valid measures of clinical competence.10
Written tests of cognitive abilities are the most common method employed for the selection of candidates into any PG programme.5 Written tests assessing clinical knowledge, when developed carefully following a blueprint, can reliably assess candidates' knowledge and critical thinking and predict internship performance.3,5 But a medical graduate is expected to possess several other attributes in addition to scientific knowledge to be able to practise independently as a physician. These include clinical and procedural skills, communication skills and professionalism, including but not limited to honesty, integrity, commitment, altruism, empathy and resilience.11 The significance of these non-cognitive and non-medical skills in facilitating medical students' transition to internship and enhancing their readiness for work is well documented in the literature.12,13 While interviews have been identified as the most commonly used method for assessing these non-cognitive attributes, issues like the unreliability of scores awarded by a single examiner question their utility for the selection of candidates.9
MMIs, originally developed for admission into medical school,14 have gained considerable global attention over the last decade as a valid and reliable method for the assessment of non-cognitive attributes. Currently, MMIs are being used for admission into undergraduate and PG programmes in medicine, dentistry, nursing, veterinary medicine and pharmacy, and have been found to be acceptable across multiple cultures.7,15-17 They have been preferred over panel interviews because of their ability to generate objective impressions of the applicant's interpersonal skills, thoughtfulness and general demeanour from multiple assessors.18
The admission process employed for the internship programme at AKU is very robust. The first component, the written test, assesses scientific knowledge, while the modified MMI assesses clinical and soft skills. In Pakistan, MMIs have earlier been reported for selection into medical school19,20 and the residency programme.21 The current study is the first to report their use for selection into an internship programme.
Unlike the traditional MMI, we developed stations to assess a few essential clinical skills in addition to various professional attributes. The reason behind developing modified MMIs, or including skills in the MMI, is the well-established fact that new graduates and interns lack essential clinical and communication skills, which in turn affects their performance as physicians and their progression into postgraduate training.12,13 A study also suggested clinical competency assessment using an 8-station OSCE as a valid instrument for predicting internship performance.3 The number of stations has been identified as the main determinant of the internal reliability of the MMI. Literature shows the use of 4-12 stations in an MMI, while reliability is optimised by including 7-12 stations, each with one examiner and 5-15 minutes spent on each station.16,17,22
As opposed to the traditional MMIs reported in the literature23 where only medical school or clinical faculty served as assessors, the AKU includes faculty from the school of nursing and midwifery, community health sciences, and representatives from human resources in addition to the clinical faculty. Some studies support the use of multidisciplinary and inter-professional assessors provided the assessors have been adequately trained on empirically-derived checklists to reduce inter-rater bias and subjectivity.23 The inclusion of human resource personnel also closed the loop for the hiring of healthcare personnel for patient care in the hospital.
At AKU, many quality assurance measures are in place for all assessments, including those conducted for the internship programme. These include the use of a blueprint, review of the assessment items for content and construct, consensus on the answer key, and pre- and post-hoc analyses.17 Face validity, the extent to which the test and its items appear appropriate and relevant to the test-takers, is confirmed by multidisciplinary review with faculty.17 Content validity refers to the extent to which the content of a test represents all of the areas the test claims to assess, and is ensured by developing a ToS, or exam blueprint.24 For example, an OSCE that is designed to assess essential clinical skills but only has history-taking or communication skills stations would have poor content validity. MMIs are generally considered to be more content-valid than traditional panel interviews because of the multiple stations/constructs being assessed. Content validity for the MMI at AKU is ensured by developing a ToS inclusive of clinical and non-clinical components, as is expected of a medical graduate.24 Additionally, assessors' and simulated patients' training is also ensured for the MMIs. The effect of these quality assurance measures was evident in the form of a reliable and valid admission test.
In the current study, the written admission tests were found to be highly reliable, as measured by Cronbach's alpha, and capable of predicting the future performance of the selected candidates. Each individual MMI station also had high reliability, while the overall reliability of the MMI examination was low. On the basis of the literature related to the OSCE, this is likely because of the diversity of the constructs being measured at different stations, and is considered a quality indicator for OSCE examinations.25 It is important to note that there was a negative or no correlation between the MMI and the written admission and exit tests, indicating that different constructs were being measured by the MMI and the written tests, as was intended. Interestingly, the moderately positive correlation between the written admission and exit examinations indicated that similar constructs (scientific or clinical knowledge) were being assessed by both. No significant difference was observed between the entry and exit test scores. This could be due to two reasons: first, the items on both tests were at the level expected of final-year candidates, which the interns had already achieved; and second, interns are expected to gain more practical skills than knowledge during internship, which cannot be assessed through written tests. The programme may consider implementing a skills-based assessment during or towards the end of the internship to assess the improvement in skills.
Conclusion
Internship is a highly demanding phase for medical graduates, requiring candidates to possess a certain skill-set both for optimal performance at the workplace, despite the workload and stresses inherent to the profession, and for maximum utilisation of this period as a learning opportunity in preparation for postgraduate training. The use of a comprehensive admission test, rather than a single measure, was found to be better for selecting fit-for-the-job candidates. All measures of quality assurance must be considered when developing assessments for the selection of candidates to ensure valid and reliable decisions.
Disclaimer: None.
Conflict of Interest: None.
Source of Funding: None.
References
1. Pakistan Medical Council notification. [Online] [Cited 2022 August 29]. Available from: URL: https://www.pmc.gov.pk/Documents/law/PMC-Act,2020.pdf.
2. Tariq M, Shamim MS, Subhan A, Saeed S, Rehman Z, Virani B, et al. Re-structuring a University Hospital’s Internship Program Using Kern’s Six-Step Model of Instructional Design. MedEdPublish. 2017; 6:11-20.
3. Lee M, Vermillion M. Comparative values of medical school assessments in the prediction of internship performance. Med Teach. 2018; 40:1287-92. doi: 10.1080/0142159X.2018.1430353.
4. Gardner AK, Cavanaugh KJ, Willis RE, Dunkin BJ. Can better selection tools help us achieve our diversity goals in postgraduate medical education? Comparing use of USMLE Step 1 scores and situational judgment tests at 7 surgical residencies. Acad Med. 2020; 95:751-7. doi: 10.1097/ACM.0000000000003092.
5. Kelly ME, Patterson F, O’Flynn S, Mulligan J, Murphy AW. A systematic review of stakeholder views of selection methods for medical schools admission. BMC Med Educ. 2018; 18:1-26. doi: 10.1186/s12909-018-1235-x.
6. Boulet JR, Durning SJ. What we measure and what we should measure in medical education. Med Educ. 2019; 53:86-94. doi: 10.1111/medu.13652.
7. Jerant A, Henderson MC, Griffin E, Hall TR, Kelly CJ, Peterson EM, et al. Do admissions multiple mini-interview and traditional interview scores predict subsequent academic performance? A study of five California medical schools. Acad Med. 2019; 94:388-95. doi: 10.1097/ACM.0000000000002440.
8. Patterson F, Ferguson E, Zibarras L. Selection into medical education and training. Understanding Medical Education: Evidence, Theory, and Practice 3rd ed. [Online] [Cited 2018 October 05]. Available from: URL: https://doi.org/10.1002/9781119373780.ch26
9. Roberts C, Khanna P, Rigby L, Bartle E, Llewellyn A, Gustavs J, et al. Utility of selection methods for specialist medical training: a BEME (best evidence medical education) systematic review: BEME guide no. 45. Med Teach. 2018; 40:3-19. doi: 10.1080/0142159X.2017.1367375.
10. Prideaux D, Roberts C, Eva K, Centeno A, McCrorie P, McManus C, et al. Assessment for selection for the health care professions and specialty training: consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011; 33:215-23. doi: 10.3109/0142159X.2011.551560.
11. Wolf SJ, Lockspeiser TM, Gong J, Guiton G. Identification of foundational non-clinical attributes necessary for successful transition to residency: a modified Delphi study with experienced medical educators. BMC Med Educ. 2018; 18:1-8. doi: 10.1186/s12909-018-1247-6.
12. Monrouxe LV, Bullock A, Gormley G, Kaufhold K, Kelly N, Roberts CE, et al. New graduate doctors’ preparedness for practice: a multistakeholder, multicentre narrative study. BMJ open. 2018; 8:e023146. doi: 10.1136/bmjopen-2018-023146.
13. Padley J, Boyd S, Jones A, Walters L. Transitioning from university to postgraduate medical training: A narrative review of work readiness of medical graduates. Health Sci Rep. 2021; 4:e270. doi: 10.1002/hsr2.270.
14. Eva KW, Macala C, Fleming B. Twelve tips for constructing a multiple mini-interview. Med Teach. 2019; 41:510-6. doi: 10.1080/0142159X.2018.1429586.
15. Knorr M, Hissbach J. Multiple mini Interviews: Same concept, different approaches. Med Educ. 2014; 48:1157-75. doi: 10.1111/medu.12535.
16. Yusoff MS. Multiple Mini Interview as an admission tool in higher education: Insights from a systematic review. J Taibah Univ Med Sci. 2019; 14:203-40. doi: 10.1016/j.jtumed.2019.03.006.
17. Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME Guide No. 37. Med Teach. 2016; 38:443-55. doi:10.3109/0142159X.2016.1158799.
18. Callwood A, Jeevaratnam K, Kotronoulas G, Schneider A, Lewis L, Nadarajah VD. Personal domains assessed in multiple mini interviews (MMIs) for healthcare student selection: A narrative synthesis systematic review. Nurse Educ Today. 2018; 64:56-64. doi: 10.1016/j.nedt.2018.01.016.
19. Ali S, Hashmi MS, Shoaib SH. Stakeholders' opinion of multiple mini interviews (MMI) for selection of medical students in Pakistan. Biomed. 2020; 36:199-205. doi: 10.53350/pjmhs2115123294.
20. Haider SI, Bari MF, Ijaz S. Using multiple mini-interviews for students’ admissions in Pakistan: a pilot study. Adv Med Educ Pract. 2020; 11:179-85. doi: 10.2147/AMEP.S246285.
21. Andrades M, Bhanji S, Kausar S, Majeed F, Pinjani S. Multiple mini-interviews (MMI) and semi-structured interviews for the selection of family medicine residents: a comparative analysis. Int Sch Res Notices. 2014; 2014.
22. Ali S, Hashmi MS, Umair M, Beg MA, Huda N. Multiple mini-interviews: current perspectives on utility and limitations. Adv Med Educ Pract. 2019; 10:1031. doi: 10.2147/AMEP.S181332.
23. Prediger S, Fürstenberg S, Berberat PO, Kadmon M, Harendza S. Interprofessional assessment of medical students’ competences with an instrument suitable for physicians and nurses. BMC Med Educ. 2019; 19:1-7. doi: 10.1186/s12909-019-1473-6.
24. Ismail MA, Pa MN, Mohammad JA, Yusoff MS. Seven steps to construct an assessment blueprint: a practical guide. Educ Med J. 2020; 12:71-80. doi: 10.21315/eimj2020.12.1.8.
25. Pell G, Fuller R, Homer M, Roberts T. How to measure the quality of the OSCE: a review of metrics–AMEE guide no. 49. Med Teach. 2010; 32:802-11. doi: 10.3109/0142159X.2010.507716.