April 2021, Volume 71, Issue 4

Research Article

Quality assurance procedures in assessment — a descriptive study of medical colleges in Pakistan

Nighat Murad  ( Pakistan Health Research Council, Islamabad, Pakistan. )
Syed Moyn Aly  ( Institute of Medical Education, Jinnah Sindh Medical University, Karachi, Pakistan. )


Objective: To identify the quality assurance procedures being implemented in the assessment system of medical colleges in Pakistan.

Methods: The cross-sectional study was conducted from March 2015 to December 2017 in medical training institutions recognised by the Pakistan Medical and Dental Council across Pakistan and Azad Jammu and Kashmir, and comprised individuals designated by the respective institutional administrations. The mixed method technique was employed using a semi-structured questionnaire. Data was analysed using SPSS 21.

Results: Of the 49 institutions, 20(41%) were in the public sector and 29(59%) were in the private sector. Overall, 35(71.4%) institutions followed a written assessment policy provided by the affiliated university, 9(18%) never did so, 22(44.8%) had content experts checking whether the questions matched the objectives, 42(85.7%) took strict steps to prevent cheating in exams, and 26(53.1%) analysed theory exams statistically. The discrimination index, difficulty index, reliability, and point biserial were calculated in 14(28.6%), 13(26.5%), 12(24.5%), and 7(14.3%) of the medical colleges, respectively. Only 12(24.5%) institutions provided written feedback on the results, and 15(30.6%) conducted an annual internal audit.

Conclusion: General quality assurance procedures in assessment were found to be in place in the majority of the colleges. However, a large proportion did not have them.

Keywords: Quality assurance, Assessment process, Validity, Reliability, Medical colleges. (JPMA 71: 1113; 2021)

DOI: https://doi.org/10.47391/JPMA.831




Introduction

Over the last couple of decades, many new medical colleges have been established in the public and private sectors across Pakistan. The increasing number of health schools is expected to have a positive effect on the availability of healthcare providers, but the situation is alarming in terms of the quality of services offered by the graduates of these colleges.1 Good assessment is essential as it guarantees that doctors, upon graduation, are skilful enough to provide good healthcare.2 Quality assurance (QA) in assessment is vital to confirm that the interpretation of the results and their usage is accurate and defensible.

Accreditation is a status indicating that an institution has achieved, and continues to sustain, a high level of standards set by an accrediting body. The Pakistan Medical Commission (PMC), formerly known as the Pakistan Medical and Dental Council (PMDC), is the only accrediting body in Pakistan for undergraduate medical and dental institutions. In 2011, the council, in collaboration with the Higher Education Commission (HEC), launched a competency-based curriculum aimed at bringing medical education on par with international standards. A detailed policy for examinations and the conditions under which students were to be assessed was also laid out. Recognising the importance of quality assurance in higher education, the HEC has provided a set of guidelines for high-quality student assessments.3

In 2004, the World Federation for Medical Education (WFME) published standards which have been adopted by many medical institutions for quality assurance and accreditation processes.4 The Educational Commission for Foreign Medical Graduates (ECFMG) has announced that by the year 2023 it will be awarding certificates only to those physicians who have completed their undergraduate studies at a properly accredited institution meeting the criteria set by the Liaison Committee on Medical Education (LCME) or the WFME.5 Various frameworks are used to analyse the quality of an assessment system, and these involve implementing a range of activities before, during and after the administration of the assessment process. Given the scarcity of evidence regarding the maintenance of excellence in examination systems, there is a need to appraise the evaluation systems of different health schools in Pakistan. The current study was planned to identify the quality assurance procedures being implemented in the assessment system of medical colleges in Pakistan.


Subjects and Methods


The cross-sectional observational study was conducted from March 2015 to December 2017 in medical colleges across Pakistan, including Azad Jammu and Kashmir (AJK), using the sequential explanatory mixed method technique.6 Using the non-probability convenience sampling method, public and private medical institutions were selected from the list of PMC-recognised medical colleges7 having a current valid status, established for >5 years, from where at least one batch of medical students had graduated, and with both pre-clinical and clinical students enrolled at the time of data collection. Medical colleges under litigation, or where admission had been stopped by the PMC, and colleges without a medical education department were excluded.

The sample was considered appropriate as a sample size >30 is generally held to be adequate.8 Also, with this size, the assumptions of normality were satisfied and the findings could be generalised to a larger population.9 One faculty member of either gender was enrolled from each selected medical college who was associated with the medical education/assessment unit, had worked in the same medical college for more than one year, was designated senior lecturer or above, was recommended by the head of the institution/college competent authority, and was granted permission by the respective institution to be interviewed.

The deans/senior management of the respective colleges were informed about the objectives of the study, and formal permission was sought from the administration of the medical colleges. Informed verbal consent was taken from each participant, who had the right to withdraw at any point during data collection as well as the right not to respond to particular question(s). Anonymity and confidentiality were maintained throughout the process.

Data was collected using a semi-structured questionnaire comprising open-ended and close-ended questions. Pilot testing of the questionnaire was done at COMSATS University, Islamabad. Data was collected from six faculty members of different public and private medical colleges selected using convenience sampling. The medical colleges chosen for pilot testing were not included in the main study. Minor adjustments were made to the questionnaire on the basis of the findings of the pilot study.

Approval for the main study was obtained from the ethics review committee of DOW University of Health Sciences (DUHS), Karachi.

Data was analysed using SPSS 21. All close-ended variables were expressed as frequencies and percentages. Open-ended responses were coded by experts to identify themes. Repeated patterns and features of the data pertinent to the research questions were identified. Individual responses of participants were reported as direct quotes where appropriate.




Results

Of the 93 medical institutions, 49(53%) were included: 20(41%) public-sector and 29(59%) private-sector; 21(43%) from Punjab, 15(31%) from Khyber Pakhtunkhwa (KP), 10(20%) from Sindh, 2(4%) from Balochistan, and 1(2%) from AJK. Among the participants, 35(71.4%) responded that their institution followed a written assessment policy provided by the affiliated university. Further, 29(59.2%) were aware of the information given in the assessment policy, and the same number reported that the roles and responsibilities of all stakeholders, as well as guidelines for examiners/assessors, were clearly defined in the written assessment policy. Of the total, 9(18%) respondents stated that their institution never followed a written assessment policy, and 12(24.5%) did not know if their institution was following any policy.

Regarding the assessment policy, 29(59.2%) respondents said the roles and responsibilities of all stakeholders were clearly defined, 29(59.2%) said theory was checked using proper tools, 32(65.3%) said their colleges evaluated clinical skills, and 21(42.8%) said components like communication skills, teamwork, attitude and behaviour were covered by applying proper tools. Overall, 38(77.5%) colleges rarely or never checked that proper tools were used to assess students' competencies (Table-1).

Responses about checking the validity and reliability of the written examination suggested that 22(44.8%) institutions had content experts check that the questions matched the objectives, while 17(34.7%) would never/rarely do that. Further, 19(38.7%) institutions had medical educationists check the written examination for flaws, while 30(61.2%) colleges trained faculty on writing high-quality items.

In 40(81.6%) colleges, the examiners' checklists for assessments, with marks distribution, were prepared before exams, and 28(57.2%) institutions would hold a pre-exam meeting to brief the examiners/assessors on the examination guidelines (Table-2).

Regarding quality assurance procedures during assessment, 42(85.7%) participants were of the view that strict steps were being taken to prevent cheating, 48(97.9%) reported that in their institutions the exam would start on time, all 49(100%) participants responded that in their college the exam would end on time, and 44(89.8%) institutions ensured that no student got any extra time. In 28(57.2%) colleges, an invigilator-to-student ratio of one invigilator per 10-20 students was maintained, 5(10.2%) maintained this ratio occasionally, while 14(28.6%) rarely or never observed such a practice.

In terms of post-exam quality assurance in assessment, 26(53.1%) participants responded that the theory exam was analysed statistically, 4(8.2%) would do this analysis occasionally, while 17(34.7%) said theory exams were rarely or never analysed. Moreover, the discrimination index was reported to be calculated in 14(28.6%) colleges and the difficulty index in 13(26.5%), while in 27(55.1%) colleges these were rarely or never calculated. Representatives of 7(14.3%) medical colleges confirmed that the point biserial was calculated in their institutions, while 28(57.2%) said that items were saved in the question bank post-analysis (Table-3).
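The difficulty index, discrimination index and point biserial named above are standard post-exam item-analysis statistics. As an illustrative sketch only (the function name and example data below are invented for demonstration and are not from the study), they can be computed from dichotomously scored item responses as follows:

```python
def item_analysis(responses):
    """Item-analysis statistics for a single dichotomously scored item.

    responses: list of (item_score, total_score) pairs, where item_score
    is 1 (correct) or 0 (incorrect) and total_score is the examinee's
    overall exam score.
    """
    n = len(responses)

    # Difficulty index: proportion of examinees answering the item correctly.
    difficulty = sum(item for item, _ in responses) / n

    # Discrimination index: difference in proportion correct between the
    # top and bottom 27% of examinees, ranked by total exam score.
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    k = max(1, round(0.27 * n))
    upper = sum(item for item, _ in ranked[:k]) / k
    lower = sum(item for item, _ in ranked[-k:]) / k
    discrimination = upper - lower

    # Point biserial: Pearson correlation between the 0/1 item score and
    # the total exam score (population SD of a 0/1 variable is sqrt(p(1-p))).
    mean_total = sum(total for _, total in responses) / n
    cov = sum((item - difficulty) * (total - mean_total)
              for item, total in responses) / n
    sd_item = (difficulty * (1 - difficulty)) ** 0.5
    sd_total = (sum((total - mean_total) ** 2
                    for _, total in responses) / n) ** 0.5
    point_biserial = cov / (sd_item * sd_total) if sd_item and sd_total else 0.0

    return difficulty, discrimination, point_biserial


# Invented example: 10 examinees' (item score, total score) pairs.
data = [(1, 90), (1, 85), (1, 80), (0, 75), (1, 70),
        (0, 65), (1, 60), (0, 55), (0, 50), (0, 45)]
p, d, r_pb = item_analysis(data)
print(p, d, round(r_pb, 2))  # difficulty 0.5, discrimination 1.0, r_pb 0.66
```

Commonly cited rules of thumb treat a difficulty index of roughly 0.3-0.7 and a discrimination or point-biserial value above roughly 0.2-0.3 as acceptable, which is the basis on which analysed items are retained in a question bank.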

Further, 24(49%) participants responded that asking for feedback from candidates was rarely or never practised in their institutions, 13(26.5%) said feedback was sought from the candidates, and 10(20.4%) said the candidates' feedback was shared with the stakeholders. At 22(44.9%) colleges, regular meetings were held to discuss validation activities. Benchmarks for graduate competencies against a standard were available in 14(28.6%) colleges, while in 20(40.8%) colleges these benchmarks were rarely or never available. Also, 20(40.8%) participants said standard benchmarks for the assessors were rarely or never available, while in 10(20.4%) colleges they were available.

Internal audit of the assessment process was rarely or never performed in 21(43%) colleges; in 15(30.6%) colleges it was conducted annually, and 5(10.2%) colleges did so occasionally. Moreover, 12(24.5%) colleges used proper checklists for reporting the internal audit, while 19(38.8%) never used them. In 10(20.4%) institutions, a summary of the audit report was available, while in 20(40.8%) it was not.

No assessor had been trained in the preceding 12 months in 20(41%) institutions, while 3(6%) institutions were unable to provide details. While discussing the evaluation of training and assessment practices, one respondent said: "evaluation is only possible after training through workshop".

Respondents from 13(26.5%) institutions were "not aware of any barriers or challenges being faced in assessment process"; 2(4%) respondents found attitude difficult to assess, quoting reasons as "no tools available for attitude assessment and feedback"; and respondents from 27(55%) institutions stated: "no process of auditing the assessment process existed". A respondent from a private-sector medical college said: "no formal audit; and department of health professional education only reviews assessments".




Discussion

Effective and credible assessment systems have been developed worldwide, including by the higher education regulatory bodies of Pakistan. In the present study, 71.4% of participants responded that their institution followed a written assessment policy provided by the affiliated university. Moreover, 59.2% were aware of the information given in the assessment policy, and a similar proportion responded that the roles and responsibilities of all relevant stakeholders, as well as guidelines for examiners/assessors, were clearly outlined in the written assessment policy. Similar results were shown in a study conducted at Pennsylvania State University.10 Assessment policies are developed at the university level, but faculty members are responsible for implementing them. Thus, a clear understanding of the whole assessment process would help them practise it in the best manner so that its benefits are maximised.11

Our study revealed that a low proportion of medical colleges strictly ensured that theory and clinical skills were checked using proper tools. A study12 conducted in a Pakistani medical college with an integrated modular system revealed that 76% of the multiple choice questions (MCQs) and 83.3% of the short-essay questions (SEQs) designed to assess higher cognition levels were actually assessing recall of knowledge. A study13 in Canada reported that inappropriate tools were utilised for assessing clinical/procedural skills. In another review14 of 56 articles carried out to identify tools used for workplace/clinically-based assessments during summative exams, it was discovered that the same tools were used in all summative exams across all academic years, and no specific tools were designed for assessing the clinical competence of students at the final stages of medical training.14 For effective medical education, a detailed blueprint of the course must be developed prior to examination, specifying the course objectives, teaching methods and the assessment tools that will be used to assess the competencies.15

In the present study, 22(44.8%) participants reported that in their institutions content experts checked whether the questions matched the objectives. However, 17(34.7%) reported that this was never/rarely done. Lack of evidence for the validity of the exam process and the reliability of results were other substantial barriers to quality assurance procedures in assessment. A similar finding was reported earlier.16 This calls into question the defensibility of the decisions made regularly at such institutions.

The current study identified that suitable feedback on assessment was not given to the students. Similar findings have been identified by another study.17 This is despite the fact that students perceived feedback as an important factor in their learning and wanted regular feedback on their performance.17 Another study stated that the HEC had undertaken a systematic process of implementing reforms, in which a quality feedback system was identified as the key challenge.18

The major gap the current study highlighted was that benchmarks against a standard were not readily available for assessors. This finding is endorsed by another study.19 Quality assurance procedures will foster benchmarking and a move towards a programmatic approach with a dominant focus on formative purposes and improvement.19

The current study found that no formal auditing or definitive methods were available for the evaluation of training and assessment practices. Auditing plays a vital role in refining assessment procedures. A study in India20 said quality in medical education demanded transparent selection procedures, a well-established curriculum, self-evaluation, and academic audits conducted by institutions. External examiners were invited for the assessment of final exams in India for quality assurance purposes, which partly addressed self-evaluation and monitoring of the quality of the education provided.20

In terms of making a real difference to policy, the researchers/faculty members of medical colleges need to be much more proactive.21 The present study showed that few institutions arranged training for assessments, with one respondent noting that "evaluation is only possible after training through workshop". A study conducted in Pakistan reflected that the majority of faculty members received no formal training in medical education, and that 75% of respondents thought that training the faculty was the primary responsibility of the institution or department.22 The current study found that the majority of the colleges had no process of auditing the assessment process. The audit process helps in identifying barriers and provides understanding for improving and implementing action plans in the healthcare process.23 Medical institutions and departments must periodically analyse data and conduct internal audits of the assessment process in order to formulate appropriate policies.




Conclusion

The majority of faculty members were not aware of the information in the assessment policy. A lack of feedback mechanisms, limited student involvement, and a lack of validity, reliability and fairness of written exams were noted. Benchmarks for assessors against a standard were not readily available. No formal auditing was done.


Disclaimer: The text is based on a thesis for Masters in Health Professional Education (MHPE).

Conflict of Interest: None.

Source of Funding: None.




References

1.      Awan AK. Trading-off quality for quantity: Mushrooming of Medical Institutions and Quality of Medical Education in Pakistan. Int J Pathol. 2016; 14:1-6.

2.      Epstein RM. Assessment in Medical Education. N Eng J Med. 2007; 356:387-96.

3.      Batool Z, Qureshi RH. Quality Assurance Manual for Higher Education in Pakistan [Online] 2014 [Cited 2016 December 05]. Available from: URL: http://www.greenwich.edu.pk/Quality_Assurance_Manual.pdf.

4.      Basic Medical Education. WFME Global Standards for Quality Improvement. [Online] 2012 [Cited 2016 December 10]. Available from: URL: http://wfme.org/standards/bme/78-new-version-2012-quality-improvement-in-basic-medical-education-english.

5.      Medical School Accreditation Requirement for ECFMG Certification. Educational Commission for Foreign Medical Graduates [Online] 2010 [Cited 2016 December 07]. Available from: URL: https://www.ecfmg.org/about/initiatives-accreditation-requirement.html

6.      Ivankova NV, Creswell JW, Stick SL. Using mixed-methods sequential explanatory design: From theory to practice. Field Methods. 2006; 18:3-20.

7.      Pakistan Medical & Dental Council. Recognized Medical Dental Colleges in Pakistan [Online] 2014 [Cited 2016 December 03]. Available from: URL: http://www.pmdc.org.pk/AboutUs/RecognizedMedicalDentalColleges/tabid/109/Default.aspx.

8.      Marshall B, Cardon P, Poddar A, Fontenot R. Does sample size matter in qualitative research?: A review of qualitative interviews in IS research. J Comp Info Sys. 2013; 54:11-22.

9.      Thomson SB. Grounded theory - sample size and validity. [Online] [Cited 2016 November 25]. Available from: URL: http://www.buseco.monash.edu.au/research/studentdocs/mgt.pdf.

10.    Terenzini PTRR, Cox BE, Quaye LBR, McIntosh KL. Survey of Faculty Activities and Perceptions: Institutional Report and User’s Guide [Online] [Cited 2016 December 27]. Available from: URL: https://www.uwyo.edu/accreditation/_files/docs/parsing_faculty_rep.pdf

11.    MacDonald SK, Williams LM, Lazowski RA, Horst SJ, Barron KE. Faculty Attitudes toward General Education Assessment: A Qualitative Study about Their Motivation. Res Pract Assess. 2014; 9:74-90.

12.    Baig M, Ali SK, Ali S, Huda N. Evaluation of multiple choice and short essay question items in basic medical sciences. Pak J Med Sci. 2014; 30:3-6.

13.    Touchie C, Murto HS, Varpio L. Teaching and assessing procedural skills: a qualitative study. BMC Med Educ. 2013; 13:69.

14.    Morris MC, Gallagher TK, Ridgway PF. Tools used to assess medical students competence in procedural skills at the end of a primary medical degree: a systematic review. Med Educ Online. 2012; 17.

15.    Ali A, Habib SH, Shah I, Jan A, Yousafzai YM. Attributes of a good medical school: Faculty perspective in local context. Pak J Physiol. 2017; 13:48-51.

16.    Khan SM, Mahmood RA. Overview of examination standards and evaluation process conducted by out-sourced universities by Pakistan Medical and Dental Council for foreign qualified medical professionals in Pakistan. J Ayub Med Coll Abbottabad. 2016; 28:126-9.

17.    AlHaqwi AI. Importance and process of feedback in undergraduate medical education in Saudi Arabia. Saudi J Kidney Dis Transpl. 2012; 23:1051-5.

18.    Khan MK. Indigenous Model of Higher Education Reforms in Pakistan: Higher Education Quality Assurance Initiatives: Quaid-e-Azam Campus, University of the Punjab Lahore-Pakistan. [Online] 2010 [Cited 2016 December 28]. Available from: URL: https://pdfs.semanticscholar.org/9128/9f2b4af6bf0ca0fa7ec029f7d94d03f94f78.pdf?_ga=2.105611331.1549414999.1590652442-565531328.1579848825.

19.    Wilkinson TJ, Hudson JN, McColl GJ, Hu WC, Jolly BC, Schuwirth LW. Medical school benchmarking–From tools to programmes. Med Teach. 2015; 37:146-52.

20.    Joshi MA. Quality assurance in medical education. Indian J Pharmacol. 2012; 44:285-7.

21.    Pandav CS. Role of faculty of medical colleges in national health policy and program development. Indian J Community Med. 2010; 35:3-6.

22.    Khalid T. Faculty perceptions about roles and functions of a department of medical education. J Coll Physicians Surg Pak. 2013; 23:57-61.

23.    Benjamin A. Audit: how to do it in practice. BMJ. 2008; 336:1241-5.

