Theme: Assessment across the continuum/across borders

Rate of knowledge acquisition over 5 years

*Freeman AC, Rice N, Roberts F.

In a five-year undergraduate medical course with a PBL, spiral curriculum, knowledge acquisition is measured with progress tests. Peninsula Medical School has been monitoring the rate of growth of knowledge over the five years of the course. Data will be presented to illustrate that growth, and to show changes in the growth rate that might have implications for the timing of final or licensing examinations.

*Corresponding Author Prof Adrian Freeman, University of Exeter Medical School,

Cultural values assessment in a training GP practice

*Inkster C, Agius S, Hayden J

Learning environment and culture have been identified as the first theme in the new standards for training published in the GMC's Promoting Excellence document. Similarly, Health Education England (HEE) has identified this as the first theme of its multi-professional educational standards. Assessing the culture and values of an organisation can pose many challenges. We describe the use of a tool developed by the Barrett Values Centre to assess the culture of a successful training GP practice. Participants choose their top ten personal values from a list of approximately 80. They then choose the top ten values that they feel describe the current culture of the organisation, followed by the top ten values they feel the organisation would need in order to achieve its maximum potential. A comparison between personal values and the current culture measures the alignment of employees' values with the organisation. The difference between the current and desired culture illustrates staff views on how the organisation might develop in the future. We discuss the results of the survey and the positive impact they have had on aligning values in the practice. The results revealed a number of key themes that had not been anticipated before the assessment was implemented. This has enabled the development of an action plan to support all staff and learners in achieving their full potential in a compassionate and caring environment, to the ultimate benefit of patients.

*Corresponding author Mrs Clare Inkster, Health Education England (North West local team),

The MRCGP International process (MRCGP INT)

*Prof A.C. Freeman, Prof Val Wass, Prof R. Withnall


Although family medicine (FM) curricula around the world have aspects of commonality, there are also country-specific differences that reflect varying cultural influences and healthcare delivery systems. The UK Membership of the Royal College of General Practitioners International qualification (MRCGP[INT]) aims to achieve standardisation through assessment methodology and validity, in order to enhance the standing of FM as a speciality and improve the quality of patient care. Has this been achieved?


The MRCGP[INT] process enables individual sites to develop assessment methodologies that are appropriate for their country/region. Assessment experts make site visits to help support educational development and the establishment of rigorous examinations consistent with international standards. Different external UK evaluators then assure the process and accredit the examinations.


MRCGP[INT] currently operates in eight sites across four continents. Over 800 doctors have achieved the MRCGP[INT] qualification. We will present data illustrating: the types of assessment chosen by different countries; the varying amounts of time and support required to reach accreditation; and the parameters chosen for evaluation. Changes that have occurred as a result of the process will be presented, including inter-site support and its impact.


The expanding UK MRCGP[INT] programme illustrates the importance of empowering the contextual development of FM accreditation within its local health care system and culture.

*Corresponding author Professor Adrian Freeman, University of Exeter Medical School,

AMSE - Quality Assurance Initiative

*Prof. Dr. Peter Dieter, AMSE, Association of Medical Schools in Europe,

AMSE is the Association of Medical Schools in the WHO European Region, which comprises 54 countries. Students are currently trained at about 500 medical schools in WHO Europe. The number of medical schools worldwide and in Europe has increased significantly in recent years, among them many private/for-profit medical schools without research activity of their own.

Medical education varies considerably between countries: in the length of study; in the training programme, including teaching and examination formats; in state examinations; in scientific orientation; and in the cooperation of medical schools with affiliated teaching hospitals and community practices.

The quality standards of the education programmes and of the institutions involved in education (schools, hospitals, and community practices), as well as quality control and quality recognition, are likewise country-specific. Accreditation by nationally recognised accrediting agencies occurs at only about 50% of medical schools.

The European Professional Qualifications Directive (2013/55/EU, amending 2005/36/EC) defines "quality" as follows: "Basic medical training includes at least five years (which can also be expressed in the corresponding number of ECTS credits) and consists of at least 5,500 hours of theoretical and practical training at a university or under the supervision of a university." Furthermore, the Directive provides for the automatic recognition of the formal qualifications of medical doctors with basic training within the EU countries, and the introduction of a European professional card is planned.

AMSE is concerned that the lack of a Europe-wide common quality standard and common quality assurance programme (including recognition) on the one hand, and the automatic recognition of doctors' licences on the other, might put patients and healthcare systems at risk in the future. AMSE therefore calls for the introduction of a common quality standard (based on the WFME standards) and a common quality assurance programme across Europe.

Developing a Cross-Border Assessment Collaboration in Global Health

*Dr Jacob Pearce, Australian Council for Educational Research,

This paper reports on a project designed to develop an assessment collaboration between medical schools in Australia and the United Kingdom in the content area of Global Health. The work involved universities in Australia and the UK; it developed an Assessment Framework for assessing Global Health internationally, developed Item Specifications, ran assessment item-writing workshops, built in a process of review, and resulted in a focussed suite of assessment items.

This paper provides an overview of the processes undertaken in developing this collaboration. It begins with a brief background to the project and the rationale for the Global Health focus, and highlights the partnerships the project developed. It then outlines the aims and objectives of the project. Importantly, the aim of the project was to improve and share assessment practice in the Global Health arena. The goal was to 'pool resources' to work with and for the participating medical schools to produce a suite of high-quality, relevant assessment items that the schools could use in whatever context they wished. The approach taken in the project will be detailed, following four broad stages: Defining Global Health and building an Assessment Framework; Specifying Item Parameters; Development of Items; and Consolidation of a Suite of Assessment Items.

The outcomes of the project are presented, along with reflections on the implementation and outcomes of the work. While the area of Global Health seems suitable for collaborative assessment across borders, a number of key issues were identified throughout the project. These key issues will be identified, both in relation to the content of Global Health, and to the process of more general cross-border sharing of assessment materials.

Similarities and differences in the curricula of the Dutch Progress Test Consortium members

*C. Krommenhoek-van Es MSc, Dr. A.J. Bremers, Dr. R.A. Tio, University Medical Centre Groningen.

Four times a year, about 10,000 students from five different universities take the Dutch Progress Test of Medicine to measure their acquired knowledge. A progress test examines knowledge at the level expected at the end of the curriculum. In contrast to a 'normal' final exam, all students participate, irrespective of the year they are in. The growing knowledge of the individual student is reflected in an increasing score over the years. Judgement is based on a comparison between students within the same cohort. The Dutch Progress Test Consortium developed this test in close cooperation with the medical schools; however, local curricula differ, as do the practices of the boards of examiners. The information gained by progress testing can be used at several levels. We will focus on comparisons between different student cohorts of one university and between student cohorts of different universities. In this way, insight into the results of the various curricula can be obtained. Moreover, curriculum changes can be monitored, and their effects, positive or negative, can be demonstrated at the university level. Over the last four years, three participating medical schools have undergone intensive curricular changes. What can we learn from the differences in scores between the respective cohorts of students?

*Corresponding author C. Krommenhoek-van Es MSc, Leiden University Medical Centre, 

Determining the Influence of Student Ethnicity on OSCE Examiners’ Judgments

*Dr Peter Yeates, Keele University, School of Medicine


Students from minority ethnic (ME) backgrounds achieve lower average scores than white students, particularly on communication assessments. Whether this arises from examiner bias or some other curricular influence is controversial. Some medical educators describe stereotyped views of south Asian students' performance: good formal knowledge, but poor communication skills. This study investigated the influence of students' ethnicity (white vs south Asian) on OSCE examiners' scores, feedback, cognitive activation of stereotypes, and memory for performances.


This was a randomised, blinded, two-group experiment. Three scripted performances were filmed by both white and Asian actors: P1 showed strong knowledge/weak communication; P2 weak knowledge/strong communication; and P3 mixed ability.
Student ethnicity in each performance varied by group. Group 1 (stereotype consistent): P1=Asian, P2=White, P3=Asian; Group 2 (stereotype inconsistent): P1=White, P2=Asian, P3=White. 158 UK OSCE examiners watched and scored the performances, provided feedback, performed a lexical decision task to measure stereotype activation, and completed a recognition-based memory test.


Students' ethnicity had no influence on examiners' scores: knowledge scores (out of 7.0) for Asian and White candidates were 3.9 (95% CI 3.8-4.0) and 3.9 (3.8-4.0), respectively (p=0.77); communication scores were 3.9 (3.8-4.1) and 3.9 (3.7-4.0), respectively (p=0.31); overall ratings were 3.1 (2.9-3.3) and 3.1 (3.0-3.3), respectively (p=0.88). Neither the valence nor the content of examiners' feedback was influenced by students' ethnicity.
The lexical decision task suggested that participants activated mental stereotypes: both groups responded to Asian-stereotype words more quickly (mean=716 ms; 95% CI 702-731 ms) than to neutral words (769 ms; 753-786 ms) or non-words (822 ms; 804-840 ms), all p