Theme: Technology enabled assessment


Using teenagers from local schools in paediatric history-taking OSCE stations

*Brooks R, Pickerd N, Powell C, Metcalf E.

Background

The GMC states that ‘effective communication between doctors and young people is essential to the provision of good care’ (1). Cardiff medical students receive paediatric-specific communication skills teaching before they go out on placement (2).

Aim

To develop and evaluate an authentic and reliable OSCE station to assess students’ ability to communicate with young people.

Method

Teenagers from local comprehensive schools were recruited via their drama departments to become simulated patients. Scenarios were developed requiring the student to take a history of a common paediatric symptom and formulate a differential diagnosis. The information was compiled on a single side of A4, written to be accessible to a teenage audience, and divided into information that could be given spontaneously when asked about the presenting complaint and information to be revealed only if asked directly. The teenagers were trained in a session at school, run by a paediatrician and an actor experienced as a simulated patient, as close as possible to the date of the exam.

During the station the student is asked to take a focussed history from the teenager about the presenting symptom. Following this, the student is asked to present their differential diagnosis, the diagnosis they consider most likely and why, and how they would proceed to confirm it.

Feedback was obtained from the teenagers taking part, and examiners were asked for their views on the simulated patients as part of their station feedback.

Results

Thirty-four teenagers from three schools provided feedback after acting in one or two of five different scenarios; the experience had been positive for them and many wanted to be invited back. Examiner feedback was positive, as were the reliability statistics.

Conclusion

Links with local schools have allowed us to develop authentic and reliable paediatric communication skills OSCE stations. School inspections and exam periods can be barriers, but otherwise schools are keen to be involved each year.

*Corresponding author Dr Rachel Brooks, Cardiff University School of Medicine, brooksrm1@cardiff.ac.uk

 

Formative assessment at medical school using PeerWise: a mixed methods evaluation

*Walsh JL, *Wyatt K, Harris BHL, Denny P and Smith PE.

Multiple choice questions (MCQs) are ubiquitous in high-stakes medical school assessment. This has driven a demand from students for formative MCQs. However, medical school staff often do not have the time or incentive to produce formative material. Student-authored question banks such as PeerWise offer a potential solution. Answering questions is known to be valuable for learning via the direct effects of test-enhanced learning, but little is known about the value of writing questions for learning.

We introduced two cohorts, in consecutive years, to the student-authored question bank PeerWise, with one-hour introductory sessions. The first cohort (n=297) has used PeerWise for two years, the second cohort (n=306) for one year. For both cohorts we examined: usage patterns; the association between question writing frequency and summative exam performance; and student perceptions of the value of PeerWise for learning, using focus groups and subsequent thematic analysis.

Over two academic years the two cohorts wrote 4,671 questions, answered questions 606,658 times and posted 7,735 comments discussing questions. In both cohorts, question writing and answering activity rose exponentially prior to summative examinations. A directly proportional relationship was found between question writing frequency on PeerWise and summative examination performance.

*Corresponding author Dr Jason Leo Walsh, Cardiff University, walsh-jason@hotmail.co.uk

 

Very Short Answer Questions: A novel online assessment tool

Sam AH*, Field SM*, Van der Vleuten C, Wass V, Schupke K, Harris J, Meeran K

Background

Single Best Answer (SBA) questions assess recognition rather than recall. Open-ended questions assess the ability to generate an answer and are considered more valid, but their use is limited by resource-intensive marking. We developed an online assessment system that could efficiently mark open-ended Very Short Answer (VSA) questions.

Method

A 60-question formative examination was given to 299 medical students in SBA and VSA formats sequentially. The VSA questions were provided on a tablet with the same clinical scenario and lead-in as the SBA questions and a space to type a short answer. The VSA test was sat first by 155 students (VSA1/SBA2), whereas 144 sat the SBA version first (SBA1/VSA2). Results from the two cohorts were compared to assess reliability and validity. We evaluated the feasibility of VSA delivery and collected the students’ opinions to assess the potential impact on learning behaviour.
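The abstract does not describe the marking algorithm itself, so the following is only a minimal sketch of how machine marking of short free-text answers might work, assuming each question has a set of acceptable answers and that unmatched responses are flagged for examiner review. The normalisation rules, answer key and function names are illustrative assumptions, not the authors’ actual system.

```python
# Illustrative sketch only: one plausible way to machine-mark very short answers.
# The answer key, normalisation rules and review behaviour are assumptions,
# not details taken from the abstract.

import re

def normalise(answer: str) -> str:
    """Lower-case, trim and collapse whitespace so trivial variants match."""
    return re.sub(r"\s+", " ", answer.strip().lower())

def mark_vsa(response: str, accepted_answers: set[str]) -> dict:
    """Return a mark and a flag indicating whether examiner review is needed."""
    cleaned = normalise(response)
    if cleaned in accepted_answers:
        return {"correct": True, "needs_review": False}
    # Unmatched responses are not simply marked wrong: they are queued for
    # the examiners, who can accept them and extend the answer key.
    return {"correct": False, "needs_review": True}

# Hypothetical question key and student responses
key = {normalise(a) for a in ["pulmonary embolism", "PE"]}
print(mark_vsa("Pulmonary  embolism", key))  # correct, no review needed
print(mark_vsa("pulmonary embolus", key))    # flagged for examiner review
```

A workflow like this would explain why examiner time per question can stay low: humans only see the residue the matcher cannot resolve.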

Results

Two examiners reviewed the machine-marked VSA answers, taking on average 1.36 minutes per question. Reliability was high: VSA1 (alpha=0.91) and SBA1 (alpha=0.84). The mean performance on the SBA questions in the two cohorts was similar (68.2% vs 69.7%, p=0.296). In the VSA1/SBA2 group, candidates scored significantly higher on the SBA questions (68.2%) than on the VSA questions (52.4%).

*Corresponding author Dr Amir Sam, Imperial College London, a.sam@imperial.ac.uk

 

The Pattern of Social Media Use and Its Association with Medical Students’ Learning

Professor Eiad AlFaris, College of Medicine, King Saud University, efarisx2@gmail.com

Background

Social media (SM) use is expanding rapidly and has deeply penetrated university campuses. It is a popular way for university students to communicate and collaborate. However, how SM use affects their learning and performance is not clear.

Objectives

The study aimed to assess the pattern, extent and reasons for SM use among medical students in relation to socio-demographic variables, and to investigate the association between SM use and overall academic grade.

Methodology

In this descriptive, analytic, cross-sectional study, a stratified sampling strategy was used. Data were collected through a structured survey instrument, and SurveyMonkey was used for data collection and analysis.

Study setting

King Saud University, College of Medicine, Riyadh, Saudi Arabia.

Results

The results show that 98% of medical students used social media. The most popular platform was WhatsApp (87.8%). There was a statistically significant association between male sex and use of YouTube (p=0.003) and Facebook (p=0.006). Female students had higher use of Instagram (p=0.001), Path (p=0.001) and Twitter (p=0.04) than their male counterparts, and higher use of WhatsApp (p=0.001) and Google+ (p=0.02) for learning purposes. A statistically significant association (p=0.04) was found between grades and checking SM during lectures.

Conclusion

Social media is very popular among medical students. YouTube and WhatsApp emerged as the most frequently used platforms, in general and for learning purposes in particular. Our findings provide valuable cautionary information about the association between checking SM during lectures and lower academic grades.

 

Enriching an electronic portfolio through learning analytics

*Annemarie Camp MSc, Maastricht University, a.camp@maastrichtuniversity.nl

In workplace learning, students’ learning opportunities often depend on the specific work they are deployed to and on supervisor feedback. Within a medical setting, this means that the extent to which a learning opportunity occurs depends on the patient mix and the clinical supervisors (Billett, 2006). Consequently, learning opportunities may vary widely among students, making progress assessment difficult. For this reason, workplace learning is usually supported by assessment instruments that provide continuous, longitudinal and multi-faceted information on the development of the learner (Driessen et al., 2012). The results of these assessments are frequently collected in an electronic portfolio (EP). However, students find it difficult to access the information relevant to them and to navigate through the portfolio data, so they may miss important feedback.

In this presentation we report on the WATCHME project, which aims to address this by enhancing a portfolio with learning analytics (LA). LA are defined as the measurement, collection, analysis and reporting of data about learners and their contexts (Siemens & Gasevic, 2012) and provide the opportunity to offer an adapted, personalized learning environment (Greller & Drachsler, 2012). In this project, a learning environment is being developed that provides personalized feedback in an EP on the basis of a student model. Using a personal student model, different types of personalized feedback are created and presented to the learner through:

- Just-In-Time Feedback: a written set of observations and suggestions for areas of focus identified from data available in the EP-system.
- Visualization feedback: graphical representations for displaying learner status and learning history over time.
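As a minimal illustration (not the WATCHME implementation), the sketch below assumes a student model that simply aggregates portfolio assessment scores per competency and generates just-in-time feedback messages for competencies falling below a chosen threshold; the competency names, scores and threshold are invented for the example.

```python
# Minimal sketch, assuming a student model built from per-competency
# portfolio scores (1-5). Names, thresholds and wording are illustrative.

from statistics import mean

def student_model(assessments: list[dict]) -> dict:
    """Aggregate portfolio assessment scores per competency."""
    model: dict[str, list[float]] = {}
    for a in assessments:
        model.setdefault(a["competency"], []).append(a["score"])
    return {c: mean(scores) for c, scores in model.items()}

def just_in_time_feedback(model: dict, threshold: float = 3.0) -> list[str]:
    """Suggest areas of focus for competencies scoring below the threshold."""
    return [
        f"Your recent assessments suggest '{c}' as an area of focus "
        f"(average score {avg:.1f})."
        for c, avg in sorted(model.items()) if avg < threshold
    ]

portfolio = [
    {"competency": "communication", "score": 4.2},
    {"competency": "clinical reasoning", "score": 2.6},
    {"competency": "clinical reasoning", "score": 2.9},
]
for message in just_in_time_feedback(student_model(portfolio)):
    print(message)
```

Visualization feedback would draw on the same model, plotting each competency’s scores over time rather than turning them into text.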

This presentation focuses on the analysis of users’ needs, as a first step in the design process of personalized feedback. With our audience we would like to briefly brainstorm and discuss the pros, cons and barriers of this workplace-based assessment technology for competency-based education.

 

LiftUpp: A Technology-enhanced Supportive Framework for the Development and Demonstration of Clinical Competency and Capability

*Dawson LJ & Mason BG

Introduction

The goal of education and training for health professionals is improving ‘competence’. However, ‘competence is not just about acquisition of knowledge and skills, but about the ability to create new knowledge in response to changing work processes’ (Govaerts et al, 2013). Thus a true demonstration of competence requires the measurement of clinical capability, by establishing the longitudinal, triangulated consistency and quality of performance across contexts and complexities. This necessitates an integrated approach to accreditation and QA, clinical and work-based assessment, provision of feedback, examiner performance, technology use, and learning analytics. Unfortunately, each of these requisites is often considered, developed and managed in isolation.

Approach

LiftUpp is a technology-supported learning design for developing and demonstrating professional competence, conceived by the University of Liverpool School of Dentistry in 2009. LiftUpp continuously and longitudinally triangulates all the learning outcomes assessed across domains and contexts and provides detailed, personalised feedback on performance (student and staff). The design can display individual real-time data on developmental progress, in an unlimited number of outcomes, distributed between any number of stakeholders, across any number of sites. A web-based dashboard is used to drive development through learner-centred reflection based on the longitudinal quality and consistency of performance.
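As a rough illustration of what longitudinal triangulation of assessment data points could look like, the sketch below aggregates ratings per learning outcome across contexts and summarises their quality (mean) and consistency (spread); the field names, ratings and summary measures are assumptions for illustration, not LiftUpp’s actual data model.

```python
# A minimal sketch of longitudinal triangulation, assuming each data point
# records an outcome, the context it was observed in and a rating.
# Field names and the consistency measure are illustrative assumptions.

from collections import defaultdict
from statistics import mean, pstdev

def triangulate(data_points: list[dict]) -> dict:
    """Summarise quality (mean rating) and consistency (spread, context count)
    for each learning outcome across contexts."""
    by_outcome = defaultdict(list)
    for p in data_points:
        by_outcome[p["outcome"]].append(p)
    summary = {}
    for outcome, points in by_outcome.items():
        ratings = [p["rating"] for p in points]
        summary[outcome] = {
            "observations": len(points),
            "contexts": len({p["context"] for p in points}),
            "mean_rating": round(mean(ratings), 2),
            "spread": round(pstdev(ratings), 2) if len(ratings) > 1 else 0.0,
        }
    return summary

points = [
    {"outcome": "history taking", "context": "paediatric clinic", "rating": 4},
    {"outcome": "history taking", "context": "restorative clinic", "rating": 5},
    {"outcome": "history taking", "context": "oral surgery", "rating": 4},
]
print(triangulate(points))
```

A dashboard of the kind described above would present such per-outcome summaries to students and staff rather than raw tables.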

Outcome

Student development and progression is now informed by reference to around 4,000 triangulated, contextualised data points per student. LiftUpp has been well received by both staff and students, and through using it as a ‘developmental framework’ our NSS scores in ‘Assessment and Feedback’ have improved from 40% to consistently over 90%. Crucially, we have been able to graduate only those who are demonstrably competent.

Conclusions

The use of LiftUpp, with its symbiosis of technology and pedagogy, has enabled the University of Liverpool Dental School to monitor and develop clinical competence. This approach is transferable to other disciplines.

* Corresponding author Ms Lucy Haire, LiftUpp Ltd c/o Liverpool IP, University of Liverpool, lucy.haire@liftupp.com