Theme: Preparedness for practice

Is a Secondary Task Method Appropriate for Measuring Mental Workload in Medical Students?

*Miss Bryony Woods, Cardiff University, woodsb@cardiff.ac.uk

Context

Mental workload (MW) is an abstract concept that treats cognition as a small, finite capacity for processing conscious, logical thought. A secondary task (ST), an additional task performed on top of the primary task, is one way of measuring MW: as the workload of the primary task approaches capacity, ST performance declines, giving an objective measure of MW.

Objectives

This study aims to validate the ST method as a measure of MW in medical students. It is expected that the measured workload will increase with task complexity.

Methods

Medical students from year 2 to year 5 at Cardiff University were recruited. The ST involved tapping the screen of an iPhone® when it vibrated at random intervals, and the time taken to respond was recorded. Each participant completed four standardised tasks, each lasting four minutes, while carrying out the ST. Task 1 measured participants’ baseline workload. Task 2 involved listening to a recorded history. Task 3 involved undertaking venepuncture on a simulated arm, and Task 4 involved simulated venepuncture while listening to another history.
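
To make the timing procedure concrete, the sketch below shows one way a secondary-task logger of this kind could be implemented. Python is used purely for illustration (the study itself ran on an iPhone), and the console prompt, interval bounds and function name are assumptions rather than details of the study’s software.

    import random
    import time

    def run_secondary_task(duration_s=240, min_gap_s=4.0, max_gap_s=15.0):
        # Prompt at random intervals and record response latencies.
        # Console stand-in for the iPhone vibration: the cue is printed
        # and the 'tap' is pressing Enter. The interval bounds are
        # illustrative assumptions; 240 s matches the four-minute tasks.
        latencies = []
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            time.sleep(random.uniform(min_gap_s, max_gap_s))  # random gap
            prompt_at = time.monotonic()
            input("TAP (press Enter): ")
            latencies.append(time.monotonic() - prompt_at)
        return latencies

    # Longer mean latency on a task implies less spare capacity,
    # i.e. higher mental workload imposed by the primary task.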

Results

Forty students were recruited. Measured workload increased with task complexity.

Exploring the development of evaluative judgment: Illustrations from junior doctors’ longitudinal preparedness for practice narratives

*Rees CE, Bullock A, Mattick KL, Monrouxe LV

Continuing professional development is a critical constituent across the medical education continuum. Central to this is the development of learners’ evaluative judgment:1 ‘the ability to understand work quality and apply those standards to appraising performance’.2 Although medical students report better understanding of quality through peer-related observation, feedback and storytelling,2 the potential of narrative for exploring learners’ evaluative judgment is currently unknown. Innovative narrative methods in health professions education, such as longitudinal audio diaries,3 can provide learners with opportunities to make sense of their own professional development,3,4 with repeated acts of storytelling seen as an ongoing form of self-evaluative judgment.1 In this short presentation, we aim to explore what evaluative judgment means within the context of novice professional practice in the UK Foundation Programme, and how narratives can reveal the processes of evaluative judgment in medical trainees. We do this by analysing narrative excerpts from our GMC-funded study of junior doctors’ preparedness for practice.5 Although most current literature explores evaluative judgment using experimental approaches,6 we hope that our presentation will encourage qualitative approaches to exploring evaluative judgment in professional learning.

References

1. Cowan J. Developing the ability for making sound judgments. Teaching in Higher Education 2010;15(3):323-334.
2. Tai J J-M et al. The role of peer-assisted learning in building evaluative judgment: opportunities in clinical medical education. Advances in Health Sciences Education 2015; DOI 10.1007/s10459-015-9659-0.
3. Monrouxe LV. Solicited audio diaries in longitudinal narrative research: a view from inside. Qualitative Research 2009;9(1):81-103. 
4. Sadler DR. Beyond feedback: developing student capability in complex appraisal. Assessment & Evaluation in Higher Education 2010;35(5):535-550.
5. Monrouxe LV et al. How prepared are UK medical graduates for practice? Final report from a programme of research commissioned by the General Medical Council, May 2014 (see: http://www.gmc-uk.org/about/research/25531.asp).
6. Hess TM et al. The impact of experienced emotion on evaluative judgments: The effects of age and emotion regulation style. Aging, Neuropsychology and Cognition 2010;17:648-672.

*Corresponding author Professor Charlotte Rees, Monash University, charlotte.rees@monash.edu

The implementation of a national exit assessment for clinical scientists in the UK: challenges, hurdles and triumphs

*Chamberlain S, Gay S

As part of the Modernising Scientific Careers initiative, the National School of Healthcare Science was tasked with designing and administering an exit OSCE for trainees following each healthcare science pathway on the Scientist Training Programme (Level 7, leading to registration as a Clinical Scientist). In 2016 there are 27 different healthcare science pathways (across life sciences, physical sciences, physiological sciences and bioinformatics), and a total of 252 trainees in their final year. The number of trainees per science and per OSCE ranges from one in cytopathology to 33 in radiotherapy physics.

This paper outlines the challenges, hurdles and triumphs experienced in creating and delivering these healthcare science OSCEs. These encompass a broad spectrum of issues: consulting and seeking consensus among stakeholders on the design of the assessment; the development of the policy infrastructure; technical issues such as the use of score weightings and standard setting; marking issues, including the creation of a standardised mark scheme template and ensuring the quality and reliability of marking; the technology infrastructure, which required a data capture system for marks that could cope with the complexity of the whole assessment environment; and the training programme delivered to prepare over 300 assessors and station writers for this new and unfamiliar mode of assessment.

Some of the challenges were more easily overcome than others. Indeed, we discuss how being unencumbered by legacy systems, investments, processes and preferences created opportunities to implement, from the beginning, elements of good assessment practice. These included the introduction of on-screen marking, the use of domain-based mark schemes, and the promotion of ‘intelligent’ interpretations of the OSCE data. There were, however, a number of challenges, some of which remain unresolved.
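
As a concrete illustration of the score-weighting issue mentioned above, a minimal sketch follows. The domain names, weights and 0-100 mark scale are hypothetical assumptions for illustration only, not the National School’s actual domain-based mark scheme.

    # Hypothetical domain-based station scoring; the domains and
    # weights below are illustrative, not the National School's scheme.
    DOMAIN_WEIGHTS = {
        "technical_skill": 0.4,
        "communication": 0.3,
        "professionalism": 0.3,
    }

    def station_score(domain_marks):
        # Combine per-domain marks (each 0-100) into one weighted score.
        assert set(domain_marks) == set(DOMAIN_WEIGHTS)
        return sum(w * domain_marks[d] for d, w in DOMAIN_WEIGHTS.items())

    print(station_score({"technical_skill": 70.0,
                         "communication": 80.0,
                         "professionalism": 60.0}))  # -> 70.0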

*Corresponding author Mrs Sandie Gay, National School of Healthcare Science, sandie.gay@wm.hee.nhs.uk

Ensuring students’ preparedness for practice: a reflective framework for assessing capability

*Hanks S, Neve H

Medical and dental undergraduate education emphasises the development of student competencies. Expected competencies are detailed in national guidance and presented as learning outcomes, separated into domains such as biomedical, clinical and professional. Each domain tends to be assessed separately, using predictable and familiar tools and settings.

In this presentation we will question whether competency-based education and assessment adequately train our students to practise in today’s complex, ever-changing healthcare environments. We will draw on research into students’ preparedness for practice to demonstrate how ‘just ticking the competency box’ has often left young doctors and dentists unprepared and unsure how to tackle problems in the real world.

We will argue that we need to educate our students for ‘capability’ as well as competence. Building on the literature, we will explore the nature of capability, its relationship to competency, and the range of skills that underpin it, such as the ability to formulate and solve problems in unfamiliar and changing settings. We will consider how capabilities are currently addressed (or not) within the continuum of assessment processes. Finally, we will propose an assessment framework that could be adapted for use in a range of assessment settings and that could support dental and medical students in their journey to become capable practitioners in a complex and unpredictable world.

*Corresponding author Mrs Sally Hanks, Plymouth University Peninsula School of Dentistry, sally.hanks@plymouth.ac.uk

Pharmacist-led video-based feedback to improve junior doctors’ prescribing

*Mattick K, Farrell O, Parker H, Bethune R

Prescribing errors occur frequently and may have significant adverse consequences. Recent research highlights challenges faced by newly qualified doctors when prescribing medications in busy hospital environments. The important contributions of socio-cultural determinants of prescribing within hospital settings are increasingly recognised, such as the role of prescribing etiquette (Charani et al. 2013) and the medical hierarchy (Mattick et al. 2014). Junior doctors frequently enact the prescribing decisions of more senior doctors (Ross et al. 2011), often without understanding the rationale for the therapeutic choice. In addition, junior doctors report a shortage of timely feedback on their prescribing performance (Mattick et al. 2014). Some studies recommend interventions intended to support junior doctors in reducing medication errors; a common theme is an enhanced role for pharmacists, who are knowledgeable about medications and the prescribing process but sit outside the medical hierarchy.

In this presentation we will summarise our current research, which involves developing a video-based feedback intervention. Foundation Year doctors (in their first two years after medical school graduation) are filmed during a patient consultation involving a medication history and any subsequent parts of the prescribing process that occur away from the patient, e.g. writing up the drug chart. The pharmacist then confirms the medication history and meets the junior doctor for a tailored feedback session. Together they review the video footage, and the pharmacist asks a series of questions based on a Self-Regulated Learning framework, designed to promote reflection and improvement planning.

We will use Van der Vleuten’s utility equation (1996) to explain how we have carefully designed the intervention with respect to its acceptability, cost and educational impact; why we have decided to emphasise validity over reliability; and what we have learned through the pilot work to date.
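
For reference, Van der Vleuten’s (1996) utility model is commonly rendered as a multiplicative index of assessment characteristics; the form below is a standard textbook rendering rather than notation taken from this abstract:

    U = R \times V \times E \times A \times C

where U is utility, R reliability, V validity, E educational impact, A acceptability and C cost-efficiency (each factor carrying a context-dependent weight in the original model). Because the factors multiply, a near-zero value on any one characteristic undermines overall utility, which frames the trade-off of emphasising validity over reliability.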

*Corresponding author Prof Karen Mattick, University of Exeter, Centre for Research in Professional Learning, k.l.mattick@exeter.ac.uk

Developing an Assessment Strategy for the 21st Century

*Coombes L, Metcalf E, Masefield R, Davies G, Smith P, Riley S

Cardiff School of Medicine is currently rolling out a new curriculum, known as C21, designed as a modern curriculum for the 21st century. As part of this process, assessment has been rethought, with a move away from traditional models so as to better prepare students for foundation practice by supporting reflection and self-directed learning.

This session will discuss how the assessment strategy was developed and what it includes, focusing particularly on the principles on which it is based: ‘frequent look, rapid remediation’ and programmatic assessment. As part of the new strategy, updated clinical and knowledge assessments have been introduced with the aim of supporting student progression through frequent low-stakes assessments and by providing detailed feedback within and across domain-based assessments, while emphasising an integrated, holistic approach to patient assessment, clinical reasoning and care planning. The aim is also to create an assessment programme that strives for authenticity through formative, summative, simulation and workplace assessment. This allows progression decisions that are educationally, statistically, academically and legally defensible.

The session will also discuss the challenges of implementing the strategy and the ways in which they are being overcome. Success will be measured through improvements in student survey scores and preparedness for practice, while maintaining stakeholder acceptance. Finally, it will examine the potential impact and implications of the forthcoming medical licensing examination for modern assessment programmes in which traditional ‘finals’ have been phased out to support a more gradual transition into practice.

*Corresponding author Lee Coombes, Cardiff University School of Medicine, coombesl2@cardiff.ac.uk