Theme: Performance-based assessments


Composite Reliability of Workplace-Based Assessment

*Nair B, *Frank I, *Matheson C, Parvathy M, Moonen-van Loon J, van der Vleuten C.

International Medical Graduate (IMG) assessment is a global issue. Most assessments are competency based, yet the ideal assessment should be based on performance. In 2010 we set up an IMG assessment program, and so far over 160 IMGs have been assessed using mini-CEX, CBD and MSF assessments in addition to in-training assessments. While most of the individual tools have been researched for their reliability, the "tool box" has not been tested for reliability in IMG assessment. In this presentation we will discuss the way we assess IMGs and the composite reliability of 12 mini-CEX, 5 CBD and 6 MSF assessments for each IMG. This combination reaches a reliability of 0.89, which is excellent for any summative assessment. We are accredited to conduct this assessment on behalf of the Australian Medical Council, and we believe it can be used for performance assessment in other settings.
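The abstract does not give the underlying calculation, but a composite figure of this kind can be illustrated with a simple classical-test-theory sketch: estimate each tool's reliability for its number of encounters via the Spearman-Brown formula, then compute the reliability of a weighted composite assuming uncorrelated errors. The per-encounter reliabilities, true-score correlations and weights below are illustrative assumptions, not values from the study, and this is not the multivariate generalisability analysis the authors used.

```python
import numpy as np

def composite_reliability(rel_single, n_obs, true_corr, weights):
    """Classical-test-theory sketch of composite reliability (illustrative only).

    rel_single : per-encounter reliability of each tool (assumed values)
    n_obs      : number of encounters per tool (e.g. 12 mini-CEX, 5 CBD, 6 MSF)
    true_corr  : assumed correlation matrix between the tools' true scores
    weights    : weight given to each tool in the composite
    """
    rel_single = np.asarray(rel_single, float)
    n_obs = np.asarray(n_obs, float)
    w = np.asarray(weights, float)

    # Spearman-Brown: reliability of the mean of n observations per tool
    rel_tool = n_obs * rel_single / (1 + (n_obs - 1) * rel_single)

    # Standardised tool scores: total variance 1, true-score variance = reliability
    sd_true = np.sqrt(rel_tool)
    var_err = 1.0 - rel_tool

    true_cov = np.outer(sd_true, sd_true) * np.asarray(true_corr, float)
    total_cov = true_cov + np.diag(var_err)   # errors assumed uncorrelated

    return (w @ true_cov @ w) / (w @ total_cov @ w)

# Hypothetical inputs, not figures from the abstract
print(composite_reliability(
    rel_single=[0.35, 0.40, 0.45],
    n_obs=[12, 5, 6],                       # 12 mini-CEX, 5 CBD, 6 MSF per IMG
    true_corr=[[1.0, 0.7, 0.5],
               [0.7, 1.0, 0.5],
               [0.5, 0.5, 1.0]],
    weights=[1/3, 1/3, 1/3],
))
```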

* Corresponding author Professor Kichu Balakrishnan Nair AM, CMPD, John Hunter Hospital, Hunter New England Health, kichu.nair@newcastle.edu.au


Assessing attitudes, performance and debriefing experience in simulation-based inter-professional learning: a tale of three instruments and the limitations of Cronbach's Alpha.

*Roberts MJ, Gale TC, Endacott R, O’Connor A

Inter-professional learning (IPL) can improve attitudes towards and awareness of other professionals' roles and may improve patient outcomes. Simulation is increasingly used for IPL and specific learning outcomes have been developed. Educators in the health care professions need to understand student attitudes toward the use of IPL in order to improve its relevance and effectiveness. The skill of the debriefer is known to be the strongest independent predictor of the overall quality of simulation encounters1. However, reliably assessing constructs such as attitudes to IPL and debriefing quality can be problematic. As part of a study centred on developing IPL in undergraduate medical and nursing programmes, we measured students' attitudes to IPL, team performance during simulated scenarios and experience of post-scenario debriefings. Using three published instruments, the KidSIM ATTITUDES tool2, the Team Emergency Assessment Measure (TEAM)3 and the Debriefing Experience Scale (DES)4, we aimed to make three comparisons. The first was to compare students' attitudes to IPL before and after participating in simulation-based IPL sessions. The second was to compare peer ratings of team performance between the first and second scenarios in each session. The third was to compare group experiences of debriefing between the sessions conducted before and after an intervention aimed at improving faculty debriefing skills. The three chosen measurement instruments have good published reliability: Cronbach's Alpha of 0.95 (ATTITUDES), 0.89 (TEAM) and 0.93 (DES). Results from the study gave comparable figures: 0.93 (ATTITUDES), 0.57 to 0.89 across different scenarios (TEAM) and 0.88 to 0.98 across different debriefing sessions (DES). Despite these results, only the first two instruments were sufficiently reliable to enable us to make our planned comparisons. We will explain, through generalisability analyses, why this was so and offer our experience as a cautionary tale to others seeking published measurement instruments to use in their own studies. References: 1. Fanning RM, Gaba DM. The Role of Debriefing in Simulation-Based Learning. Simulation in Healthcare 2007; 2: 115-25. 2. Sigalet E, Donnon T, Grant V. Undergraduate Students' Perceptions of and Attitudes toward a Simulation-Based Inter-professional Curriculum: The KidSIM ATTITUDES Questionnaire. Simulation in Healthcare 2012; 7: 353-8. 3. Cooper S, Cant R, Porter J, et al. Rating medical emergency teamwork performance: Development of the Team Emergency Assessment Measure (TEAM). Resuscitation 2010; 81: 446-52. 4. Reed SJ. Debriefing Experience Scale: Development of a Tool to Evaluate the Student Learning Experience in Debriefing. Clinical Simulation in Nursing 2012; 8: e211-e7.
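For readers unfamiliar with the statistic at the centre of this cautionary tale, the sketch below shows the usual computation of Cronbach's Alpha from a respondents-by-items score matrix; the data and function are illustrative and not drawn from the study. Because Alpha treats only item sampling as error, a high coefficient can coexist with poor reliability for comparisons across scenarios or sessions, which is what the generalisability analyses mentioned above are designed to expose.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's Alpha for a respondents-by-items score matrix.

    scores : 2-D array, rows = respondents (e.g. students), columns = items.
    """
    scores = np.asarray(scores, float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical ratings: 5 students x 4 Likert-type items
ratings = np.array([
    [4, 5, 4, 5],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])
print(round(cronbach_alpha(ratings), 2))
```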

*Corresponding author Mr Martin Roberts, Plymouth University, martin.roberts@plymouth.ac.uk

 

Evolving from an ISCE to an OSCE

*Kerr JP, *Rice N, *Bradley S, *Freeman A

Prior to 2015 the Peninsula College of Medicine and Dentistry undertook an Integrated Structured Clinical Examination (ISCE) for its year two students. This was a series of six twenty-minute stations conducted with real patients within the Clinical Skills Resource Centre. Students failing to meet the standard, set by the Hofstee method, undertook a second six-station, twenty-minutes-per-station exam after a short period of remediation. The exam was commended for its validity, including the use of real patients. Comments from external examiners highlighted issues of reliability, particularly due to variation between patient volunteers within the same station. There was also no direct link between ISCE stations and the taught curriculum and its related learning outcomes. To address these comments we describe our experience of moving to an Objective Structured Clinical Examination (OSCE) of twelve stations of ten minutes each. New materials had to be written, and these were built around the learning outcomes from the taught clinical skills programme. All stations were trialled in advance using volunteer fourth-year students, and adjustments were made based on feedback from them, patient volunteers and examiners. A switch to the borderline regression method of standard setting was made. New examiner training materials were developed, including an innovative use of electronic voting equipment to provide real-time benchmarking on the day of the exam. Extensive communication was required between the faculty and the student body to explain the changes; this was provided in electronic and lecture form, as well as through representation to the student liaison committee and the faculty's board of studies. The examination went smoothly, and feedback from examiners, students and the external examiner was overwhelmingly positive. Statistics showed a significant improvement in reliability.
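As a point of reference for the change in standard setting, the following is a minimal illustrative sketch of the borderline regression method, not the school's actual implementation: for each station, candidates' checklist scores are regressed on the examiners' global grades, and the pass mark is the score predicted at the borderline grade. All names and data below are hypothetical.

```python
import numpy as np

def borderline_regression_cutoff(global_grades, checklist_scores, borderline_grade=2):
    """Borderline regression standard setting for one OSCE station (illustrative sketch).

    global_grades    : examiner's global rating per candidate (e.g. 1=fail ... 5=excellent)
    checklist_scores : the same candidates' station checklist scores
    borderline_grade : the grade representing a borderline performance
    Returns the checklist score predicted by the regression line at the borderline grade.
    """
    slope, intercept = np.polyfit(global_grades, checklist_scores, deg=1)
    return slope * borderline_grade + intercept

# Hypothetical data for one ten-minute station
grades = [1, 2, 2, 3, 3, 4, 4, 5, 5]
scores = [8, 11, 12, 15, 14, 18, 17, 20, 22]
print(round(borderline_regression_cutoff(grades, scores), 1))   # station pass mark
```

Station cut scores derived this way are typically summed across stations to give the overall exam pass mark.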

*Corresponding author Dr Paul Kerr, University of Exeter Medical School, p.kerr@exeter.ac.uk

 

Crossing boundaries: Consultant Clinical Healthcare Scientists ensuring the cut above

*Gay S, Chamberlain S

As part of the Modernising Scientific Careers vision, a standardised, prescriptive assessment strategy has worked well for the blended, postgraduate L7 Scientist Training Programme (STP) across nearly 30 diverse healthcare sciences. The strategy is detailed, right down to defining a minimum number of DOPS, CBDs, OCEs and MSFs to be completed in the workplace, the academic Masters and the final exit assessment (OSCE), as STP trainees have not experienced this type of training framework or assessment before. Time and effort have been invested in ensuring that the exit assessment, the outcome of which declares the trainee fit to practise, is standardised, with consistent regulations and policies applied across the specialisms. This paper highlights that the same strategy for the exit assessment cannot cross the boundary from L7 to the L8 Higher Scientist Specialist Training (HSST) programme. HSST doctoral-level trainees will go on to become the leaders, innovators and shapers of their specialist workforce or field of practice. More is expected of these future consultant clinical scientists to ensure patients benefit from the most up-to-date scientific developments, therapies and treatments. It was essential for the School to recognise that these trainees must be able to demonstrate that they are a cut above the rest, and this is not without its challenges. We discuss how, within an overarching qualification-led framework and aligned to a set of broadly defined proficiency standards and competencies, HSST trainees, charged with designing their own assessment evidence using any suitable format, will demonstrate that they are up to meeting their own individual, context-specific challenge and aspiration for their future role. So far, understanding and applying this self-directed assessment strategy has been the biggest hurdle for these future leaders, and is itself a demonstration of their capability at this level.

*Corresponding author, Mrs Sandie Gay, National School of Healthcare Science, sandie.gay@wm.hee.nhs.uk

 

Development of an Inter-professional Clinical Assessment Support Network

*Metcalf EP, Jenkins S, Goodfellow R, Coombes L, Masefield R, Jones A, Hodson K, Ryan B, Bansal M, Dummer P

Six Schools within the College of Biomedical and Life Sciences, Cardiff University, use OSCEs and related clinical assessments to assess students registered on health-related programmes. It was identified that differing approaches to clinical assessments were in place, and that there was therefore an opportunity to learn from the diverse expertise of our inter-professional colleagues.

Aim

A collaborative approach was implemented to improve the quality assurance, efficiency and student experience of clinical assessments in Cardiff, identifying academic and administrative best practice.

Method

In 2015 the College OSCE Project group was established, with representation from all Schools, with the aims of:

- Highlighting best practice, identifying issues, risks and solutions

- Identifying similar activities across Schools and adopting similar policies

- Collaborating with Registry to review logistics and quality assurance requirements

- Sharing and disseminating good practice and supporting the ongoing development of OSCEs across all Schools, based on best evidence from the educational literature

- Promoting best practice through training and peer observation

- Developing coordinated requirement specifications for an IT solution to manage OSCEs, with the primary aims of reducing administrative workload and minimising risk

Results

To date the group has achieved:

- A high-level map of OSCE activities representing individual School processes

- Requirement specifications for an IT solution, informed by engagement with commercial IT product demonstrations

- A standard-setting workshop

- Policy guidance: safety alerts, specific provisions and continuity guidance (for unexpected events during exams)

- Peer observation across the College

Conclusion

Feedback from the group has highlighted the benefits of inter-professional collaboration in a clinical assessment setting, enriching and strengthening the quality of OSCEs and related assessments. It has also identified the challenges of establishing an inter-professional education platform. Recommendations: further areas of collaboration have now been identified, including the development of innovative cross-discipline scenarios to pilot within OSCEs and an ongoing process of peer review of individual Schools' assessments.

*Corresponding author Dr Elizabeth Metcalf, Cardiff University, School of Medicine, metcalfep@cf.ac.uk