
The Views of Medical Students about the Purpose and Objectivity of Assessment in a Medical College in Western Nepal
Correspondence Address:
Dr. P. Ravi Shankar, KIST Medical College, P.O. Box 14142, Kathmandu (Nepal). Phone: 00977-1-5201680; Fax: 00977-1-5201496. E-mail: ravi.dr.shankar@gmail.com
ABSTRACT
Context: Previous studies have shown problems with the different methods of assessment in medical schools. However, studies from Nepal are lacking.
Objective: The present study was carried out to obtain information on the purpose of assessment in an ideal world and at the Manipal College of Medical Sciences (MCOMS) and the perception of student respondents regarding the objectivity of assessment at MCOMS.
Methods: The study was carried out among the second to seventh semester students during February 2006, using a three-part semi-structured questionnaire. The first part collected basic demographic information, the second related to the purpose of assessment in an ideal world and at MCOMS, and the third dealt with the perceived objectivity of assessment at MCOMS. Percentage agreement scores were compared between the basic science (semesters II, III and IV) and clinical science (semesters V, VI and VII) students using the chi-square (χ²) test (p<0.05). The median total scores were compared among different subgroups using appropriate non-parametric tests (p<0.05).
Findings: 340 students participated in the study (overall response rate: 74.1%). 165 respondents (48.5%) were basic science students. 166 students (49.9%) were Indians, 145 (43.5%) were Nepalese and 22 (6.6%) were Sri Lankans and others. Basic science students were significantly more likely to agree that assessment at MCOMS ensured competence, provided feedback and guided student learning. The median total score was 22 (maximum possible score: 32). The score was higher among basic science students and among Sri Lankan students.
Conclusions: The overall perceived objectivity of assessment at MCOMS was not high. Modifications in the assessment system may be considered. Further studies are required.
Keywords: Educational measurement, Evaluation, Medical students, Nepal, Questionnaires
Introduction
Medical schools are moving towards a student-centred approach to medical education, with students taking increasing responsibility for their learning (1), (2).
Previous research has shown that changes in evaluation/assessment often lag behind changes in the curriculum and learning methodology (3). A 1999 report by the General Medical Council (GMC) noted that the recommended development of appropriate assessment methods for a modified curriculum was still awaiting implementation at that time (4).
The Manipal College of Medical Sciences (MCOMS), Pokhara, Nepal, admits students from Nepal, India, Sri Lanka and other countries to the four-and-a-half-year undergraduate medical (MBBS) course. The course is divided into nine semesters. The basic science subjects of Anatomy, Physiology, Biochemistry, Microbiology, Pathology, Pharmacology and Community Medicine are taught in an integrated, organ-system based manner during the first four semesters. The teaching of Community Medicine continues until the seventh semester, and the clinical subjects are taught during the last five semesters of the course.
The department of Pharmacology conducts problem-stimulated learning (PSL) sessions (5) and organizes teaching-learning sessions on communication skills (6). The College is affiliated to Kathmandu University for MBBS teaching. The revised curriculum of Kathmandu University emphasizes student-centred, problem-based, integrated teaching and learning with early patient contact (7).
The evaluation system at MCOMS continues to be traditional. The theory assessment is subjective, using short-answer questions. During the first four semesters, students are evaluated in all seven subjects through fortnightly tests (FNTs). There are also end-of-semester examinations and university examinations at the end of the second and fourth semesters. On the clinical side, there are monthly assessment tests, semester examinations and university examinations at the end of the seventh and ninth semesters.
Practical evaluation in the basic sciences is carried out during the semester and university examinations. Spotters, practical exercises, prescription writing, communication skills assessment, clinical problems and slides are some of the exercises used. The students also appear for a viva voce. 'Spotters' is a practical exercise in which the student is asked to identify a drug, a specimen or an instrument and answer a related question within three minutes. The viva voce is an oral examination in which the examiner evaluates student knowledge. In the clinical sciences, practical evaluation is carried out at the end of each clinical posting and during the semester and university examinations. Long cases, short cases, spotters, statistics problems and clinicosocial cases are some of the exercises; a viva voce is also conducted.
Previous studies have described the value of student feedback about evaluation/assessment (3),(8). Information on the students’ opinion regarding the evaluation system at MCOMS is lacking. Hence, the present study was carried out. The objectives of the study were to obtain:
a) The views of the medical students about the purpose of evaluation in an ideal world and at MCOMS
b) The views of the medical students about the objectivity of the evaluation at MCOMS
c) Additional comments about the evaluation system, and
d) The association, if any, between the students' views and their demographic and personal characteristics.
Methods
The study was carried out among the second to seventh semester students of MCOMS, Pokhara, during February 2006. The first semester students lacked exposure to some of the assessment methods and were therefore excluded. The students' views about the purpose and fairness of evaluation were obtained using a semi-structured questionnaire. The authors consulted JA Spencer, the author of a study on evaluation/assessment at the Newcastle Medical School in the United Kingdom, regarding the questionnaire and the conduct of the study (1). The questionnaire was adapted from the one used in that study (1).
Informal discussions were held with the students and with the teaching staff of MCOMS. The questionnaire developed was pilot tested among a group of eight fourth semester students. The pilot testing concentrated on determining whether the respondents were able to understand the questions and statements; the students had no difficulty in understanding the questionnaire. Their responses were not included in the final analysis. The questionnaire used is shown in the Appendix.
The questionnaire consisted of three parts. The first part collected basic demographic information about the respondents. The second part related to the purpose of evaluation in an ideal world and at MCOMS. The third part dealt with the objectivity of the evaluation at MCOMS. In the last two sections, the students were asked to tick 'strongly disagree', 'disagree', 'agree' or 'strongly agree' in response to each statement. Space was provided for free-text comments. The objectives of the study were explained to the students, who were invited to participate. Written informed consent was obtained from the participants.
We presented the same five purposes of evaluation/assessment as used in a previous study (1). The purposes were to ensure competence, to provide feedback to the students, to evaluate the curriculum, to guide learning and to predict future performance as a doctor. Two sets of responses were elicited: the first related to an ideal world and the second to the students' experiences at MCOMS.
The section on the objectivity of evaluation at MCOMS dealt first with the evaluation process as a whole and then with individual methods of evaluation. In contrast to the situation reported from Newcastle (1), the methods of evaluation at MCOMS were fewer and more uniform across the semesters. As detailed in the Introduction, there were differences between the basic and clinical semesters, so two different sets of questionnaires (differing in the third section) were administered to the basic science and clinical science semesters.
The demographic information collected included age, gender, semester of study, nationality and the source of financing of medical education. The occupation of the parents and the students' self-assessment of their academic performance were also noted.
The questionnaires were distributed during the pharmacology practical sessions for the second, third and fourth semester students and during the problem-based community medicine sessions for the fifth, sixth and seventh semesters. For the second section, a 'percentage agreement' score was calculated by aggregating the responses 'strongly agree' and 'agree'. The responses 'disagree' and 'strongly disagree' were aggregated to form a 'disagreement score'. The percentage agreement scores for an ideal world and at MCOMS were compared using the chi-square (χ²) test. A p value of less than 0.05 was taken as statistically significant.
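To make the aggregation and comparison concrete, the following is a minimal sketch in Python using SciPy; the counts and variable names are hypothetical illustrations, since the actual analysis was carried out in Epi Info and SPSS.

from scipy.stats import chi2_contingency

# For one statement (e.g. 'assessment ensures competence'), the four Likert
# responses are collapsed into agreement (strongly agree + agree) and
# disagreement (strongly disagree + disagree) for each condition.
ideal_agree, ideal_disagree = 300, 40    # hypothetical 'ideal world' counts
mcoms_agree, mcoms_disagree = 230, 110   # hypothetical 'at MCOMS' counts

# 2x2 contingency table: rows = condition, columns = agree/disagree.
table = [[ideal_agree, ideal_disagree],
         [mcoms_agree, mcoms_disagree]]
chi2, p, dof, expected = chi2_contingency(table)

pct_ideal = 100 * ideal_agree / (ideal_agree + ideal_disagree)
pct_mcoms = 100 * mcoms_agree / (mcoms_agree + mcoms_disagree)
print(f"Percentage agreement: ideal {pct_ideal:.1f}%, MCOMS {pct_mcoms:.1f}%")
print(f"chi-square = {chi2:.2f}, p = {p:.4f}")  # significant if p < 0.05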
The objectivity of the overall evaluation and of the different methods was studied in two ways. For each parameter, the number and percentage of individuals who strongly disagreed, disagreed, agreed and strongly agreed were calculated. For both the clinical and the basic science students, there were eight questions about the objectivity of evaluation. Each statement was scored as follows: 1 = strongly disagree, 2 = disagree, 3 = agree and 4 = strongly agree. The scores of the eight statements were summed, and the median total objectivity score and the interquartile range were calculated. The median objectivity scores were compared among the different subgroups of respondents using the Mann-Whitney test for dichotomous variables and the Kruskal-Wallis test for the others (p<0.05).
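The scoring and subgroup comparison can be sketched in the same way; again, the scores below are hypothetical, and Python/SciPy stands in for the Epi Info/SPSS analysis actually used.

import numpy as np
from scipy.stats import mannwhitneyu, kruskal

# Likert scoring applied to each of the eight objectivity statements.
SCORE = {"strongly disagree": 1, "disagree": 2, "agree": 3, "strongly agree": 4}

def total_objectivity_score(responses):
    # Sum of the eight statement scores (maximum possible total: 32).
    return sum(SCORE[r] for r in responses)

# Hypothetical total scores by subgroup.
basic = np.array([24, 22, 23, 21, 25, 22, 23])
clinical = np.array([20, 21, 19, 22, 20, 18, 21])

all_scores = np.concatenate([basic, clinical])
q1, median, q3 = np.percentile(all_scores, [25, 50, 75])
print(f"Median total score {median} (IQR {q1}-{q3})")

# Dichotomous grouping (e.g. basic vs clinical science): Mann-Whitney test.
u, p = mannwhitneyu(basic, clinical)
print(f"Mann-Whitney U = {u}, p = {p:.4f}")

# More than two groups (e.g. nationality): Kruskal-Wallis test.
indian = np.array([22, 21, 23, 20])
nepalese = np.array([21, 22, 20, 23])
srilankan = np.array([25, 24, 26])
h, p = kruskal(indian, nepalese, srilankan)
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")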
The free-text comments were grouped into those dealing with the purpose of evaluation in an ideal world and at MCOMS, and those regarding the objectivity and improvement of evaluation. Epi Info and the Statistical Package for the Social Sciences (SPSS version 12 for Windows) were used for the statistical analysis.
Results
A total of 459 questionnaires were distributed, of which 340 were completed and returned (overall response rate 74.1%). The highest response rate (86.9%) was among the sixth semester students and the lowest (62.7%) among the fifth semester students. One hundred and sixty-five (48.5%) respondents were in the basic science course, whereas 175 (51.5%) were in the clinical sciences. There were 197 (59.2%) male and 136 (40.8%) female respondents. One hundred and sixty-six (49.9%) students were Indian nationals, 145 (43.5%) were Nepalese and 22 (6.6%) were from Sri Lanka and other countries (information about gender and nationality was missing in seven questionnaires).
Purposes of Evaluation
A breakdown of the students' responses about the purposes of evaluation in an 'ideal world' and at 'MCOMS' is shown in (Table/Fig 1). 'Ideal world' refers to the students' opinion about what the purposes of evaluation should ideally be; 'MCOMS' refers to whether they agreed that these purposes were fulfilled at MCOMS. The students were of the opinion that the purposes of evaluation at MCOMS differed in some respects from those in an ideal world. The percentage agreement with the statements that the purpose of evaluation at MCOMS was to ensure competence, to provide feedback and to guide student learning was significantly lower than in an ideal world.
The basic science students commented that the purposes of evaluation in an ideal world were to guide learning, to provide feedback, to gauge the students' level of understanding, to ensure competence and to allow self-evaluation by the students. The clinical science students were of the opinion that evaluation was aimed at evaluating student knowledge, ensuring competence, providing feedback, guiding learning and evaluating the teaching system.
According to the basic science students, the purposes of evaluation at MCOMS were to make them regular in their studies, to obtain feedback, to prepare them for the university examinations and to provide guidance. According to the clinical science students, the purposes were to prepare the students for the university examinations, to force students to study and to evaluate the teaching system.
Objectivity of Evaluation
The students' views about the objectivity of the different methods of evaluation at MCOMS are presented in (Table/Fig 2). Of the 336 respondents, 210 (62.5%) agreed that, on the whole, the evaluation at MCOMS was fair. Among both basic and clinical science students, the highest percentage agreement was for spotters (270/336, i.e. 80.4%) and the lowest for the viva voce (169/336, i.e. 50.3%).
The median total objectivity score was 22 (interquartile range 19-24). The total agreement score on the Likert scale regarding the objectivity of the methods of evaluation at MCOMS was compared according to gender, nationality, phase of study (basic or clinical science), parents' occupation, selection criteria and the students' self-assessment of their academic performance. The results are shown in (Table/Fig 3). The difference in the total score was statistically significant between the basic and clinical science students and among students of different nationalities.
According to the basic science students, the problems with the evaluation at MCOMS, and their suggestions for improvement, were that there was often a lack of objectivity in assessment, that the evaluation process was sometimes superficial, that attitude and behaviour should also be assessed and that there should be more stress on practical aspects. The clinical students were likewise of the opinion that the evaluation was not always objective, that there were no multiple choice questions, that the evaluation should concentrate more on practical aspects and that the results of the evaluation were sometimes not taken seriously.
Discussion
Concern has been expressed recently that new doctors are not well prepared to meet the expectations of society (9). Many medical educators share these concerns. In developed countries, a number of initiatives have been introduced to strengthen and improve medical education. Changes in assessment often lag behind changes in curriculum and learning (3).
A survey from Finland showed that there were problems with traditional assessment practices (10). A previous study in South Africa investigated student attitudes towards the objective structured clinical examination (OSCE) and conventional assessment methods (11). Students displayed a positive attitude towards the OSCE and regarded it as an excellent alternative to the traditional oral examination. In our study also, students showed a high degree of agreement with the statement that the assessment in spotters, communication skills and the OSCE was objective.
A previous study showed a clear difference in perception between students in phase 1 and phase 2 of the course (1). In our study, the basic science students had a more positive opinion regarding the objectivity of the assessment at MCOMS, and the free-text comments also indicated that they were happier with the evaluation system. The reasons for these differences should be investigated in detail in future studies. A limitation was that the eighth and ninth semester students were not included in the survey; the varying response rates of the different semesters could also have influenced the results.
The percentage of students agreeing with the different purposes of evaluation in an ideal world was lower than that observed previously (1), except for the purpose of predicting their performance as doctors. The reasons for these differences could be an interesting subject for future research. Compared with Newcastle, a greater percentage of students agreed that the purpose of assessment at MCOMS was to provide feedback and to predict their performance as doctors.
The percentage agreement that the overall evaluation process at MCOMS was objective was lower than that observed at Newcastle. Problems were noted with the evaluation in monthly and fortnightly tests, the viva voce, university theory papers, practical exercises and end-of-posting tests. In a previous study (1), students showed a low percentage agreement with the fairness of assessment during clinical rotations, poster presentations and the viva voce. The reliability and fairness of viva voce examinations is considered to be unacceptably poor (12). The clinical students commented that the evaluation at MCOMS encouraged rote learning; the examinations require the reproduction of a large quantity of factual information, and many students learn by rote. They may have mistaken a consequence of the examination system for one of its objectives.
Perceptions of fairness can influence the acceptability of an assessment instrument (1). In general, the evaluation methods with more clearly delineated methods of marking were rated as more fair and objective by the students. Putting in place a defined system of evaluation for the viva voce and written examinations, communicating it to students and ensuring compliance with it could be considered to improve objectivity.
The median objectivity score was higher among the Sri Lankan students than among the other nationalities; however, the small number of Sri Lankan students may have influenced this result. The comment that the evaluation was sometimes carried out in a superficial fashion should be thoroughly investigated. There should be more emphasis on practical evaluation, and MCQs may be considered for inclusion in the evaluation.
As in the previous study, students desired more feedback on their evaluation as a means of guiding learning. Students at Newcastle had said that, without adequate feedback, assessment could not be used as a tool to inform the learning process (1). Australian medical educators have suggested that formative assessment opportunities for providing student feedback should be included in the curriculum (13),(14). Prompt, detailed and meaningful feedback should be provided to the students.
A previous paper detailed how good professional regulation depends on good assessment (15). Transparent performance criteria and formative feedback can help improve testing (15). The authors stated that the purpose and intended focus of an assessment should be clearly defined and that an appropriately designed pilot study should evaluate its feasibility, acceptability, validity and reliability. This process can be helpful even for established assessment methods.
Our study had many limitations. Firstly, students of the eighth and ninth semesters were not included, and the response rates of the different semesters varied. The respondents were at different points in their assessment experience, which may have affected their responses. A detailed analysis of the reasons for particular responses and comments was not carried out. Around 26% of the students did not participate in the study, and the reasons for their non-participation were not investigated. The students' perception of their evaluation in the clinical sciences may be influenced by their past experiences in the basic sciences; this influence was not evaluated. Because anonymity was maintained, we did not correlate the students' perceptions about evaluation with their performance in the evaluation examinations. Finally, the study was carried out in 2006 and reflects student perceptions at that time, which may not reflect the present situation.
Conclusion
The overall agreement with the objectivity of evaluation at MCOMS was low. Evaluation in the monthly and fortnightly tests, practical exercises, viva voce and end-of-posting examinations was not considered objective. Students wanted a more holistic pattern of evaluation, one that also takes attitudes and behaviour into consideration, and stressed the inclusion of MCQs and a greater emphasis on practical evaluation.
The results of this preliminary study indicated that the evaluation system at MCOMS may need to be modified. However, further studies are required.
Appendix
Questionnaire distributed to the basic science students
Medical students' views about the purpose and fairness of assessment
Age: Sex: M/F Semester:
Nationality: Govt. selected/Self-financing
Self-assessment of academic performance: Excellent/Good/Average/Poor
Occupation of parents: Father: Mother:
For the following statements, tick the response which you think is appropriate.
In an ideal world
(1) The purpose of assessment is to ensure competence.
Strongly disagree/disagree/agree/strongly agree
(2) The purpose of assessment is to provide feedback.
Strongly disagree/disagree/agree/strongly agree
(3) The purpose of assessment is to evaluate the curriculum.
Strongly disagree/disagree/agree/strongly agree
(4) The purpose of assessment is to guide student learning.
Strongly disagree/disagree/agree/strongly agree
(5) The purpose of assessment is to predict performance as a doctor.
Strongly disagree/disagree/agree/strongly agree
Comments about the purpose of assessment in an ideal world (the factors listed):
At the Manipal College of Medical Sciences, Pokhara
(1) The purpose of assessment is to ensure competence.
Strongly disagree/disagree/agree/strongly agree
(2) The purpose of assessment is to provide feedback.
Strongly disagree/disagree/agree/strongly agree
(3) The purpose of assessment is to evaluate the curriculum.
Strongly disagree/disagree/agree/strongly agree
(4) The purpose of assessment is to guide student learning.
Strongly disagree/disagree/agree/strongly agree
(5) The purpose of assessment is to predict performance as a doctor.
Strongly disagree/disagree/agree/strongly agree
Comments about the purpose of assessment at MCOMS (the factors listed):
Any further purposes of assessment:
Objectivity of assessment:*
1) The assessment process at MCOMS overall is objective.
Strongly disagree/disagree/agree/strongly agree
2) The assessment in the FNTs is objective.
Strongly disagree/disagree/agree/strongly agree
3) The assessment of the University theory papers is objective.
Strongly disagree/disagree/agree/strongly agree