
Author: Warren J. Cheung, MD, MMEd, FRCPC

Co-Author: Farhan Bhanji, MD, MSc(Ed), FRCPC, FAHA

Co-Author: Nancy Dudek, MD, MEd, FRCPC

Co-Author: Wade Gofton, MD, MEd, FRCSC


Objectives

At the end of this chapter you will be able to:

  • describe the role of workplace-based assessment in Competence by Design (CBD)
  • explain the notion of entrustment and its importance in decisions about resident promotion in CBD
  • explain how faculty can teach others to use the entrustability scale

Introduction

Competence by Design (CBD) seeks to improve training by increasing observation and “assessment for learning” to ensure graduated independence and competence in training. In CBD, the teaching, learning and assessment of physician competencies are grounded in the daily practice environment. The idea is to encourage frequent, low-stakes assessment opportunities throughout residency training. The aim of frequent observation and feedback is to facilitate the gradual development of a trainee’s ability to safely perform the clinical and professional tasks of their discipline without supervision. The accumulation of these low-stakes observations provides both you as program director and your program’s competence committee with the data needed to determine a trainee’s development of competence as they progress through the stages of training.

As a program director, you know that assessment of competence involves more than simply testing knowledge. It requires observation and documentation of what residents “do” in clinical practice. This corresponds to the highest level of Miller’s pyramid and is the essence of workplace-based assessment (WBA) and observation.1 Nurturing and assessing resident competence requires that trainees be observed during many practice opportunities and that sufficient and specific information be gathered about trainee performance in the workplace.2–4 WBA facilitates trainees’ development because of its low-stakes nature, but it also contributes important performance data for progress and promotion decisions.

As a program director, you will need to consider the faculty development needs in your environment, to ensure that the clinical supervisors who perform WBA for residents in your program have the skills that they need to do this work. The information in this chapter, along with the resources in the references, will help you do this.

Entrustable professional activities — the organizing framework for workplace-based assessment

Entrustable professional activities (EPAs) are key tasks of each specialty discipline that can be learned, assessed and delegated in authentic practice environments. EPAs integrate the various CanMEDS Roles and are linked to multiple CanMEDS milestones. EPAs and their component milestones are developmentally progressive and aligned with each stage of residency training. Milestones may have their greatest utility in providing content upon which to design curriculum and on which to focus feedback to learners when they have or have not achieved an EPA. Limiting the number of milestones for each EPA assessment and focusing faculty on the narrative component of the assessment may improve the quality of coaching and documentation.5–7

EPAs are high-value tools for learning, but they also serve as summative benchmarks of the learning that has occurred. Formative teaching occurs in physician practice settings and involves frequent workplace observations linked to timely feedback for trainees. This feedback is necessary to guide a resident’s learning progression. Documentation of the observation also provides essential information that competence committee members later use to help inform their recommendation about resident progress and promotion to the next stage of training. For each EPA, the respective Royal College specialty committee offers a recommended number of successful observations that support a determination of competence. Multiple practice observations across different contexts by different assessors over time provide the comprehensive image of a trainee’s practice ability needed to make this decision.

How workplace-based assessment works on the ground

In CBD, you want to ensure that your faculty make use of authentic clinical supervision opportunities to engage in the WBA of each resident’s performance. Observation of trainee performance is followed by a conversation in which the clinical teacher offers specific feedback for improvement and actionable strategies to accomplish these improvements. This verbal feedback is then documented so that it can contribute practice performance data to inform residents’ personal learning plans and the EPA achievement decisions of the competence committee. Frequent and timely coaching conversations between a learner and observer are a critical element of WBA in CBD.8 This is known as coaching in the moment, and it follows the RX-OCR process (Table 24.1):

Table 24.1 RX-OCR process

R: Establish educational Rapport between the resident and the clinician (an educational alliance or partnership).
X: Set eXpectations for the encounter (discuss learning goals).
O: Observe the resident. With CBD, the role of the clinical teacher is evolving from supervisor to frequent observer and coach. When clinical teachers directly or indirectly observe the work residents do more often, their observations provide greater learning opportunities and a more comprehensive image of trainees’ competence.
C: Engage in a Coaching conversation for the purpose of improving that work. As part of this conversation, the clinical teacher gives the trainee “coaching feedback,” focusing on specific, actionable suggestions for improvement and how such improvements can be accomplished.
R: Record a summary of the encounter, including observation specifics and the performance rating, using an observation form.

Faculty development resource: RX-OCR module – This module can be useful as you start to design a faculty development plan. In addition, your local CBD lead or postgraduate medical education office may have resources to help in this domain. Don’t forget to collaborate with other programs, as the skill set required to perform WBA will be needed across all programs.

The roles of clinical faculty and the competence committee

It is important that your faculty understand that they are not responsible for making summative decisions about a trainee’s overall competence or EPA achievement; this responsibility is reserved for the competence committee. Rather, faculty play a coaching role by helping residents develop through frequent but thoughtful teaching observations coupled with actionable feedback and formative WBAs in practice settings. An individual EPA assessment can be thought of as the educational equivalent of a progress note: it’s only one step in the evolution of a resident. Decisions about competence are not based on this single assessment but rather on the trainee’s developmental trajectory and achievement of competence as determined by the competence committee (Table 24.2).

Table 24.2 Comparison of the roles of clinical faculty and the competence committee

Clinical faculty:
  • Observe resident performance
  • Provide coaching and feedback
  • Record the context of the observation and the verbal feedback given

Competence committee:
  • Determines achievement of entrustable professional activities
  • Makes summative decisions about residents’ overall competence and promotion

Teaching observations should be linked to EPAs that align with the resident’s current training stage and the routine practice activities of the clinical rotation. In many cases, it will be difficult to observe the entire EPA in a single encounter. In such cases, it is still valuable for the resident’s overall learning if the supervisor provides specific feedback on a portion or particular aspects of the EPA, and these assessments contribute useful data that will be reviewed by the competence committee. Some WBA tools have been intentionally designed to assess multiple EPAs simultaneously. These tools often take the form of a daily assessment form.

Tips on conducting workplace-based assessments

  • Supervisors have an obligation to create psychologically safe learning environments in which feedback conversations can take place. It is vitally important that they consider power differentials and the various forms of implicit, explicit and structural bias, and how these influence assessment and feedback conversations.
  • Both residents and supervisors can initiate authentic practice observations. This approach encourages residents to take responsibility for directing their learning and assessment. However, residents should not initiate all of the observations: faculty-initiated EPA assessments add to the reliability of the assessment process and the data it produces, which in turn strengthens the validity of the entrustment decisions based on those data.
  • Ideally, observations should occur frequently and be integrated into routine daily practice and workflow. When observation and assessment are normalized in the workplace, clinical faculty are more likely to provide concrete feedback on how the resident can improve and record honest assessments of resident performance.9 Residents are also more likely to perceive each observation as low-stakes.10 
  • Observation of resident performance may be direct (e.g., observation of a trainee performing a knee examination) or indirect (e.g., discussion with the resident about a patient’s treatment plan). Although direct observation produces optimal feedback, it is not always feasible given workflow demands and trainees’ greater independence as they progress.9 However, specific constructive feedback and valuable performance data can still be generated through indirect observation.

Documenting workplace-based assessments

WBA involves clinical supervisors documenting authentic observations in the workplace on a regular basis. Recording rich observation and feedback data is just as important as the verbal coaching that occurs, because it supports the formative and summative goals of WBAs. In their documentation of WBAs, clinical supervisors rate the degree of assistance that the trainee required to perform the task and provide a brief but rich narrative that describes the context in which the task was observed along with the coaching feedback provided.

The Royal College has developed national observation templates to help clinical supervisors to document WBAs. However, individual programs and schools may choose to adopt different WBA tools with entrustment anchors to fit their local practice contexts and needs.
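For programs that configure their own electronic platforms, it can help to think about what a single WBA record must capture: the entrustment rating, the context of the observation and the coaching feedback given. The following minimal Python sketch is purely illustrative; the WbaObservation class and its field names are assumptions made for this chapter, not the Royal College’s national templates or any particular platform’s schema.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class WbaObservation:
        """A single low-stakes WBA of one EPA (hypothetical structure).

        Each instance is one data point among many; the competence
        committee reviews trends across records, not single ratings.
        """
        epa_code: str                # EPA being assessed (e.g., a stage-specific code)
        resident_id: str
        observer_id: str
        observed_on: date
        entrustment_rating: int      # 1-5, using the O-SCORE entrustment anchors
        context: str                 # setting, patient characteristics, case complexity
        coaching_feedback: str       # specific, actionable suggestions discussed verbally
        direct_observation: bool = True  # False for indirect observation (e.g., case review)

        def __post_init__(self) -> None:
            # Guard against out-of-range ratings at the point of data entry.
            if not 1 <= self.entrustment_rating <= 5:
                raise ValueError("entrustment_rating must be between 1 and 5")

Note that in any such structure the narrative fields (here, context and coaching_feedback) carry as much weight as the numeric rating, for the reasons discussed under “The critical role of narrative comments” below.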

Rating scales that use entrustment anchors

WBA of residents should reflect the priorities and clinical expertise of the assessor while striving to mitigate potential biases. Traditional rating scales are often anchored to a predetermined level of training (e.g., below, meets or above expectations) or describe the quality of the performance (e.g., poor, acceptable, excellent). However, these scales are subject to the rater’s own standard of expected performance, which varies from rater to rater. There are well-documented reliability and validity concerns with these forms of rating scales.11–14

In contrast, scales that incorporate entrustment anchors use the standard of competence or independent performance as the reference point for the top end of the scale.15 These scales are meaningfully structured around the way physician supervisors already make day-to-day decisions about trainee performance. These decisions are rooted in the rater’s judgments about how much supervision a trainee required to perform a task, with the eventual goal of training clinicians who are ready for safe, unsupervised practice.

Although many terms have been used to describe scales that incorporate entrustment anchors (e.g., entrustability or independence scales), these tools are conceptually similar in that they are behaviourally anchored ordinal scales, based on the progression of competence, that reflect increasing levels of independence.12 Entrustment anchors have been demonstrated to be more reliable than traditional anchors.16,17 Experience with entrustment anchors has also demonstrated that trainees are willing to accept lower scores when these are paired with concrete and actionable feedback.18–20

The Royal College has adopted a particular set of entrustment anchors that have repeatedly demonstrated excellent reliability and the ability to discriminate between junior, mid-level and senior residents when applied in various clinical settings: “I had to do,” “I had to talk them through,” “I had to prompt them from time to time,” “I needed to be in the room just in case” and “I did not need to be there.”18–23 Although entrustment anchors might seem to favour procedural-type teaching observations, they can also be an effective means of assessing non-technical skills, such as performance in an outpatient clinic.19,20,23

O-SCORE Entrustability Scale

Level 1: “I had to do” (i.e., requires complete hands-on guidance, did not do, or was not given the opportunity to do)
Level 2: “I had to talk them through” (i.e., able to perform tasks but requires constant direction)
Level 3: “I had to prompt them from time to time” (i.e., demonstrates some independence, but requires intermittent direction)
Level 4: “I needed to be in the room just in case” (i.e., demonstrates independence but is unaware of risks and still requires supervision for safe practice)
Level 5: “I did not need to be there” (i.e., complete independence, understands risks and performs safely, practice ready)

Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10):1401–1407. Reproduced with permission of the authors.
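Because the five anchors form a behaviourally anchored ordinal scale, they map naturally onto an ordered enumeration when the scale is encoded in an electronic assessment platform. The short Python sketch below is illustrative only; the OScoreAnchor name and its member names are assumptions, not part of the published tool.

    from enum import IntEnum

    class OScoreAnchor(IntEnum):
        """O-SCORE entrustment anchors as an ordinal scale (illustrative only)."""
        HAD_TO_DO = 1                  # "I had to do"
        TALK_THEM_THROUGH = 2          # "I had to talk them through"
        PROMPT_TIME_TO_TIME = 3        # "I had to prompt them from time to time"
        IN_ROOM_JUST_IN_CASE = 4       # "I needed to be in the room just in case"
        DID_NOT_NEED_TO_BE_THERE = 5   # "I did not need to be there"

    # Higher values reflect greater independence, so ordinal comparisons hold:
    assert OScoreAnchor.DID_NOT_NEED_TO_BE_THERE > OScoreAnchor.HAD_TO_DO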

Faculty development resource: Entrustment module – This module provides examples of the entrustment anchors in action for procedural and non-procedural EPAs.

Tips on assigning ratings in tools for CBD workplace-based observation

Learning to use entrustment anchors can be difficult at first, in part because observers are sometimes unclear about the implications of making certain judgments. The following tips may help with their learning as well as their comfort level.

  • The rating assigned should reflect the performance observed in that encounter; it is not a prediction of future performance. For example, giving a rating of “I did not need to be there” indicates that in retrospect the supervisor did not need to be present for the encounter that they just observed. It does not mean that the resident is authorized to do the clinical task independently going forward.
  • A rating of “I did not need to be there” does not mean the supervisor didn’t provide feedback during or following the observation. Although the trainee may have demonstrated competence, the supervisor can and should provide feedback to move them toward excellence or mastery (i.e., alternative approaches or opportunities to improve efficiency).
  • Distinguishing between “I needed to be in the room just in case” and “I did not need to be there” can often be challenging. When deciding between these two ratings, it may be helpful for the supervisor to consider the efficiency with which the resident performed the task as well as their observation of the trainee’s ability to anticipate and mitigate real or potential risks involved in the activity.
  • Each EPA observation should be low-stakes. Each observation reflects the performance of the resident on a particular task, under unique conditions and within a specific context, and it represents a single data point among many. No single rating determines whether the resident has, or has not, achieved the EPA. Summative decisions of EPA achievement and overall competence are determined on the basis of performance trends by the competence committee. Multiple data points across contexts, time and raters provide a more comprehensive image of a trainee’s practice ability.
  • Be aware of potential biases in assessment systems that can influence interactions and engagement with learners during and after assessment.

The critical role of narrative comments

Following a teaching observation, trainees should ideally be provided with specific face-to-face verbal feedback about their performance. This information should also be well documented. Brief but well-written narrative comments are highly valuable information within any WBA.24 At the most basic level, narratives provide essential observation information that performance ratings cannot capture. These narratives document information about:

  • specific behavioural guidance provided to the trainee to improve their future practice performance
  • the context of the observation with sufficient detail to justify the performance ratings

Although trends in performance ratings help the competence committee track a resident’s progress, it is the narrative comments that provide the rich context of the observed performances necessary to make judgments of competence. These comments also provide the resident with data for personal reflection and creation of a personal learning plan.

Tips for improving narrative documentation in tools for CBD workplace-based observation

Faculty can be trained to provide high-quality comments, and faculty development initiatives should focus on helping supervisors to do this. The following tips may be useful:

  • Provide the context of the observation (e.g., setting, patient characteristics, case complexity).
  • Provide a rationale for the WBA performance ratings you assigned. Give enough detail for an independent reviewer to understand the trainee’s performance.
  • Provide suggestions for performance improvement, and do so in a supportive manner.
  • Consider power differentials in how language can be perceived. Ensure comments are constructive, action-oriented, and specific.

Strategies for dealing with challenges you’ll probably encounter with observers

Challenge 1: “I don’t have time to observe and document”

Whether they know it or not, your faculty are already in the habit of observing resident performance in the clinical environment; this is how residents receive feedback for improvement. Not all observation has to occur in real time. For example, observation of performance can take the form of reviewing a case with the resident and having them explain their management plan, or reading through the resident’s note and corroborating their findings by seeing the patient yourself. When it comes to direct observation, the literature is clear that being intentional and planning opportunities for real-time observation increases the likelihood that it will occur.9,25 This is best done at the start of the day with the resident. When it comes to documenting assessments, faculty will need to determine what works best for their workflow: for some, this may mean completing assessments in the moment, while for others it may work best to complete them at the end of the day. Some programs set a goal of a minimum of one documented observation per day. Ensuring that faculty and residents have quick access to the electronic platform in their practice setting is critical for facilitating documentation. It is also important to consider equity when discussing time and workload. For example, do all faculty who feel concerned or overwhelmed have access to protected time for the necessary tasks of supervision?

Challenge 2: “Residents only seek observations of strong performance”

It is human nature to want to perform well and be recognized, and nurturing a growth mindset does not happen overnight. However, several strategies can help address the issue of selective observation. For example, it is important to set up your system so that both residents and faculty can initiate EPA observations. Residents may not always be comfortable seeking feedback when they have performed poorly, yet observation and documentation in these situations are necessary to demonstrate growth, identify areas for improvement and ensure a breadth of clinical experiences. These benefits should be clearly explained to residents during their orientation and throughout their training to foster a disposition for seeking feedback. Faculty should also be oriented to your discipline-specific EPAs and be encouraged to trigger ad hoc observations. Sharing the responsibility of triggering and documenting observations in daily practice can help normalize the process of WBA for both faculty and residents.

Conclusion

Workplace observations are a critical component of a program of assessment within CBD. They serve as a stimulus for coaching and feedback as well as a means of collecting clinical performance data. Teaching observations should be linked to EPAs that align with the resident’s current training stage and the routine practice activities of the clinical rotation. In addition to the verbal feedback they provide to residents, supervisors should strive to document high-quality, rich narrative comments that provide sufficient detail to guide the resident in terms of their personal learning and also support the rating assigned. WBA ideally uses a variety of information, such as specific EPA or procedural observation forms, narrative observations, specialty- or program-specific (daily) assessment forms and multisource feedback tools. WBAs should be sampled broadly across different contexts and raters to create the comprehensive image of a trainee’s practice ability needed to inform decisions made by the competence committee. As a program director, you will see how important faculty development is in providing your faculty with the skills they need to provide effective WBA to support resident growth and development.

References

  1. Miller G. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9):S63–S67.
  2. Oerlemans M, Dielissen P, Timmerman A, Ram P, Maiburg B, Muris J, et al. Should we assess clinical performance in single patient encounters or consistent behaviors of clinical performance over a series of encounters? A qualitative exploration of narrative trainee profiles. Med Teach. 2017;39(3):300–307.
  3. Carraccio C, Englander R, Holmboe ES, Kogan JR. Driving care quality: aligning trainee assessment and supervision through practical application of entrustable professional activities, competencies, and milestones. Acad Med. 2016;91(2):199–203.
  4. ten Cate O, Hart D, Ankel F, et al. Entrustment decision making in clinical training. Acad Med. 2016;91(2):191–198.
  5. Cook DA, Kuper A, Hatala R, Ginsburg S. When assessment data are words: validity evidence for qualitative educational assessments. Acad Med. 2016;91(10):1359–1369.
  6. Ginsburg S, van der Vleuten CPM, Eva KW. The hidden value of narrative comments for assessment. Acad Med. 2017;92(11):1617–1621.
  7. Govaerts M, van der Vleuten CP. Validity in work-based assessment: expanding our horizons. Med Educ. 2013;47(12):1164–1174.
  8. Landreville J, Cheung W, Frank J, Richardson D. A definition for coaching in medical education. Can Med Educ J. 2019;10(4):e109–e110.
  9. Cheung WJ, Patey AM, Frank JR, Mackay M, Boet S. Barriers and enablers to direct observation of trainees’ clinical performance: a qualitative study using the theoretical domains framework. Acad Med. 2019;94(1):101–114.
  10. McQueen SA, Petrisor B, Bhandari M, Fahim C, McKinnon V, Sonnadara RR. Examining the barriers to meaningful assessment and feedback in medical training. Am J Surg. 2016;211(2):464–475.
  11. Carline JD, Wenrich M, Ramsey PG. Characteristics of ratings of physician competence by professional associates. Eval Health Prof. 1989;12(4):409–423.
  12. Kreiter CD, Ferguson K, Lee WC, Brennan RL, Densen P. A generalizability study of a new standardized rating form used to evaluate students’ clinical clerkship performances. Acad Med. 1998;73(12):1294–1298.
  13. van der Vleuten C, Verhoeven B. In-training assessment developments in postgraduate education in Europe. ANZ J Surg. 2013;83(6):454–459.
  14. Turnbull J, Van Barneveld C. Assessment of clinical performance: in-training evaluation. In: Norman GR, Van der Vleuten C, Newble D, editors. International Handbook of Research in Medical Education. Dordrecht: Kluwer; 2002:793–810.
  15. Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability scales: outlining their usefulness for competency-based clinical assessment. Acad Med. 2016;91(2):186–190.
  16. Crossley J, Johnson G, Booth J, Wade W. Good questions, good answers: construct alignment improves the performance of workplace-based assessment scales. Med Educ. 2011;45(6):560–569.
  17. Crossley J, Jolly B. Making sense of work-based assessment: ask the right questions, in the right way, about the right things, of the right people. Med Educ. 2012;46(1):28–37.
  18. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE): a tool to assess surgical competence. Acad Med. 2012;87(10):1401–1407.
  19. Rekman J, Hamstra SJ, Dudek N, Wood T, Seabrook C, Gofton W. A new instrument for assessing resident competence in surgical clinic: the Ottawa Clinic Assessment Tool. J Surg Educ. 2016;73(4):575–582.
  20. Cheung WJ, Wood TJ, Gofton W, Dewhirst S, Dudek N. The Ottawa Emergency Department Shift Observation Tool (O-EDShOT): a new tool for assessing resident competence in the emergency department. AEM Educ Train. 2019;4(4):359–368.
  21. MacEwan MJ, Dudek NL, Wood TJ, Gofton WT. Continued validation of the O-SCORE (Ottawa Surgical Competency Operating Room Evaluation): use in the simulated environment. Teach Learn Med. 2016;28(1):72–79.
  22. Voduc N, Dudek N, Parker CM, Sharma KB, Wood TJ. Development and validation of a bronchoscopy competence assessment tool in a clinical setting. Ann Am Thorac Soc. 2016;13(4):495–501.
  23. Halman S, Rekman J, Wood T, Baird A, Gofton W, Dudek N. Avoid reinventing the wheel: implementation of the Ottawa Clinic Assessment Tool (OCAT) in internal medicine. BMC Med Educ. 2018;18(1):1–8.
  24. Hatala R, Sawatsky A, et al. Using in-training evaluation report (ITER) qualitative comments to assess medical students and residents: a systematic review. Acad Med. 2017;92(6):868–879.
  25. Hauer KE, Holmboe ES, Kogan JR. Twelve tips for implementing tools for direct observation of medical trainees’ clinical skills during patient encounters. Med Teach. 2011;33(1):27–33.