The Changing Faces of Feedback: Challenges to our practices

David Boud
Emeritus Professor
Faculty of Arts and Social Sciences
University of Technology, Sydney, Australia

 

Feedback is the single aspect of higher education courses most criticised by students across countries and across disciplines. There have been many institutional attempts to improve this situation, but with little effect. In recent years, educational feedback has become a focus for study, and more fruitful ways of formulating the challenge of feedback have been developed. This presentation will focus on changes in the way feedback is being conceptualised and the implications of these changes for assessment practices. A key element is a renewed emphasis on designing educational activities as if feedback were expected to have an effect on students’ learning.

David Boud is Director of the Centre for Research in Assessment and Digital Learning, Deakin University, Melbourne and Emeritus Professor at the University of Technology Sydney. He is an Australian Learning and Teaching Senior Fellow. He has published extensively on teaching, learning and assessment in higher and professional education. His current research focuses on assessment for learning in higher education, academic formation and workplace learning. He has been a pioneer in developing learning-centred approaches to assessment across the disciplines, particularly in student self-assessment, building assessment skills for long-term learning and new approaches to feedback (Feedback in Higher and Professional Education, Routledge, 2013).

 


Peering Through the Looking Glass: How advances in technology, psychometrics and philosophy are altering the assessment landscape in medical education

André F. De Champlain, PhD
Director, Psychometrics and Assessment Services
Medical Council of Canada
Ottawa, Ontario, Canada

 

The science of assessment has undergone a number of changes that not only alter the ways in which practitioners might conceive of evaluation but also affect the strategies employed to measure candidates’ competencies, from undergraduate medical education through physician revalidation. Advances in technology now permit the assessment of a broad range of competencies via computer, as well as the automated marking of tasks. Bayesian networks capitalize on the strengths of several disciplines to model a host of outcomes, including assessment for learning and the impact of diagnostic feedback. Finally, programmatic assessment suggests a new paradigm that integrates learning and assessment in a recursive fashion. This session will outline key activities in each of these areas.

André De Champlain, PhD, is Director of Psychometrics and Assessment Services at the Medical Council of Canada. He is involved in a number of projects at Council, including the review of current scoring and standard setting methodologies for MCC examinations in light of the new MCC Qualifying Examination blueprint, as well as several research studies aimed at better informing and supporting policy and current developments, both at the licensure and post-licensure levels. In addition, he is responsible for overseeing a number of innovative research areas at the MCC, including automated item generation as well as automated marking of constructed-response items. Finally, he is a chief contributor to efforts aimed at re-conceptualizing the LMCC program, in light of recent trends in medical education research and assessment more broadly. Dr. De Champlain previously spent nearly 15 years at the National Board of Medical Examiners, where he acted as lead research psychometrician for several USMLE examinations.

 


Assessment drives learning: How can assessment programmes be used to stimulate learning?

Prof. Dr. Janke Cohen-Schotanus
Emeritus Professor in Medical Education
Groningen University
The Netherlands

 

Faculties try to prevent student dropout and delays in study progress. The factors influencing study delay are diverse: student characteristics, curriculum characteristics, but also the characteristics of assessment programmes. We all know that assessment drives student learning; however, little is known about how assessment can be used to optimize student learning. Assessment programmes determine not only what students study but also when they study. This presentation focuses on assessment programmes in undergraduate medicine. Several design principles of assessment programmes that influence graduation and dropout rates are discussed.

Janke Cohen-Schotanus has been a member and chairperson of various audit visit committees for medicine, human movement science and health science study programmes, and has served on accreditation panels in the medical sector at both university and professional master’s level. She owes her comprehensive grasp of educational matters to experience gained as a curriculum developer and teacher trainer. In particular, her expertise centres on educational development in the medical field. She is also an expert in the areas of testing, effectiveness and quality control.