

Author: Glen Bandiera, MD, FRCPC


Objectives

At the end of this chapter you will be able to:

  • describe best practices for designing a resident selection process
  • outline the steps in designing a selection process
  • list common pitfalls related to resident selection and actions to avoid each of them
  • outline the key elements of orientation for newly selected residents (including Competence by Design)

Case scenario

Author: Javeed Sukhera, MD, PhD, FRCPC

Your program is conducting virtual interviews. During a committee discussion, comments are made about one specific candidate, noting that they had “poor lighting” in their space and did not appear “professional.” A heated debate ensues among the committee. One member notes that physical appearance is an important element of professionalism, while another member argues that the comments on this candidate’s “poor light” are inappropriate. Another member of the committee turns to you and says, “You’re the Program Director, what do we do?”

You are aware that bias is pervasive in selection processes and remember that we must acknowledge and openly discuss biases that may influence decisions. You may want to use this moment as an opportunity to discuss the importance of bias mitigation and highlight how the process is designed to promote structure and objectivity. You may also highlight how important it is to question our assumptions and challenge one another.

In this circumstance, you can role model for others that dissent and debate are healthy and welcomed. Creating an open culture for discussion can help mitigate bias. Although you may have your own perspectives on why comments on “lighting” for a virtual interview are problematic (e.g., biases that may favor candidates with certain physical appearances, better bandwidth or cameras, or more private spaces), in such a circumstance you can ask your committee members to challenge their own biases and consider why associating “professionalism” with physical appearance can be highly problematic. Ultimately, you should also invite feedback on how to address such biases as part of future interview processes.

Introduction

With so many highly skilled and prepared candidates to choose from, you and your committee are likely to match very good candidates regardless of the process you use. But are you choosing the right candidates for your specific program? Are you treating the entire cohort of applicants fairly and equitably? Are you taking all reasonable steps to avoid inappropriate bias or inadvertent secondary consequences of your processes? In all sectors, who is chosen to join businesses, units or teams can determine the culture, processes, successes and outcomes for years to come, and residency programs are no different. Accordingly, resident selection is one of the most critical functions that program directors and committees must take on. For many, it is also one of the most rewarding and fun. Knowing that this is a high-stakes decision means that you probably spend a great deal of time thinking about and reflecting on your selection processes. It also means that you may be vulnerable to misinterpreting certain risks in an effort to “get it right” or be prone to unintended bias.

Proper selection requires a major investment of time, so it makes sense to focus on making high quality decisions. As program director, you should plan to dedicate a significant amount of time and energy (yours and that of others) to the selection process, including:

  • review and updating of program descriptions, tools and questions (6–8 hours);
  • committee review and discussion of proposed selection model (1–2 hours);
  • file review (1 hour per reviewer per file);
  • training/orientation of participants, including anti-bias training (2 hours per individual);
  • selection of candidates for interview (1–2 hours of deliberations and quality checks);
  • interviews (3 × 30-minute interviews × 2 interviewers per interview = 3 hours per interviewee); and
  • final ranking decisions and review and associated quality checks (2–4 hours).

The hourly estimates will vary greatly from individual to individual, and the list above does not include the time administrative personnel will need to devote to organizing it all.
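As a rough illustration of how this time adds up, the following sketch tallies committee person-hours for a hypothetical applicant pool using the estimates above. All cohort numbers (pool size, interview invitations, reviewers per file, number of faculty) are assumptions chosen for the example, not recommendations.

```python
# Illustrative tally of committee person-hours; all cohort numbers are
# hypothetical assumptions, and the per-task estimates come from the list above.
n_applicants = 60        # hypothetical applicant pool
n_interviewees = 24      # hypothetical number invited to interview
reviews_per_file = 3     # independent file reviews per application
n_participants = 12      # faculty participating in the process

prep = 8                                           # program description, tools and questions
model_review = 2                                   # committee review of the selection model
file_review = n_applicants * reviews_per_file * 1  # 1 hour per reviewer per file
training = n_participants * 2                      # anti-bias and process training
shortlisting = 2                                   # selecting candidates for interview
interviews = n_interviewees * 3                    # 3 person-hours per interviewee
ranking = 4                                        # final deliberations and quality checks

total = prep + model_review + file_review + training + shortlisting + interviews + ranking
print(f"Approximate committee person-hours: {total}")  # 292 with these assumptions
```

Even modest assumptions like these quickly reach several hundred person-hours, which is why the faculty and administrative time commitment deserves explicit planning.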

With all of this in mind, this chapter is intended to help you design a comprehensive selection process and avoid some common pitfalls that arise when assessing and ranking candidates. Remember, however, that as a new program director you will likely not have to start from scratch. Your program probably already has a resident selection process in place; if you are lucky, it is a good process and you may not need to do much, if any, redesigning. As a first step, then, you should consult with your selection committee to find out as much as you can about the existing process. Focus on asking about what is working well and where they think you can most help.

There are five generally agreed-upon principles that should guide a sound selection process,2 derived from extensive experience and articulated in the literature in human resources, education and other fields.

First, you must clearly determine the attributes that matter and articulate these to all who participate in your process. Ideally, these would be explored by your selection committee and revisited annually.

Second, you should rely on a breadth and diversity of opinion and perspectives in making selection decisions. A well-constituted committee with equal weight given to each “voice” will produce a diversity of perspectives, evening out the “noise” generated by interrater variation and enabling a broader and independent assessment of a multitude of applicant characteristics.

Third, decisions should be based, to the degree possible, on a comprehensive understanding of the candidate’s past performance and demonstrated personality characteristics, values, and competencies. Although you will likely be rooting for some candidates and at times believe with good intentions that they may improve over time, it is important to be diligent in your screening and selection processes while approaching selection with humility and respect for the lived experience of candidates.

Fourth, you need to understand and clearly describe what constitutes legitimate grounds for decision-making. You must determine not only what matters but also what does not. This can be difficult to do because many committee members, letter-writers and candidates will focus on things that matter to them but not necessarily to the program or the committee. Some attributes, such as racial identity, gender, and family situation, are explicitly forbidden from consideration in selection decisions. Most of these factors are well-known and enshrined in legislation, but some are more subtle. It is important for program directors to understand the applicable legislation, regulation, and policy pertaining to human rights, discrimination, and harassment. If candidates are asked about their family situation, debt load, country of origin, etc., it can have a negative impact not only on your program but also on the mental well-being of candidates. Therefore, it is also important for those involved in selection processes to be aware of their biases and avoid making inferences about a candidate’s suitability or interest on the basis of inappropriate details such as how many electives a candidate completed in an area or how much volunteer work they did.

Finally, you should strive for standardization at all points along the way. All candidate files should be scored against fixed objective criteria, interviews should be based on a fixed list of key questions (with interviewers being given the ability to explore areas the candidate brings up during the interview) and ranking decisions should be based on a predetermined process that relies heavily on previous assessments of the candidates and a careful consideration of the program’s needs, the overall profile of the candidate cohort (e.g., male/female balance) and any concerning or mitigating information that arises during the course of the selection process. It is also important to remember to consider how biases may become embedded in your objective criteria and processes. These can often be mitigated by ensuring that diverse perspectives and experiences are incorporated into establishing and evaluating selection processes. Throughout selection, there will be circumstances that cannot be handled within the confines of your defined selection process: your program should have a plan for how to consider such cases and when to seek advice from central authorities. Adhering to these five general principles will help you to design a step-wise approach to a defensible, systematic and reliable selection process.

Getting started

Any selection process must be built upon a solid foundation. As an accreditation standard, all institutions that sponsor residency programs must have a mission statement or equivalent outlining the place for medical education within the institution. Most departments or equivalent units within faculties will also have a mission, vision and/or values statement or a strategic plan, as will some divisions. These are all good places to start as you reflect on your selection process. Ultimately, your residency program will need to decide what it is trying to accomplish through the operation of the program. Are you seeking to attract and prepare residents to serve a specific population? Do you have a focus on leadership or research? Does your site/university/faculty have a specific resource that is unique in your area that you feel an obligation to exploit for societal good? All of these should inform your decisions about what type of graduates you want to see and, by extension, what type of residents you seek. Selection processes should align with a program’s resources, values and intent. Many committees either do not discuss this issue at all or tolerate varying opinions among committee members about what they are looking for, which creates a problematic source of interrater variability that can adversely affect the prospects of otherwise excellent candidates. Reflection and introspection about your program should culminate in a concise statement about the goals of your program, articulated in a leading statement in CaRMS (Canadian Resident Matching Service) (and any other) public resources.

Once you have decided on the overall goals of your program, you need to determine what type of candidate you feel will have the best chance of success. Identify the key factors that are important and what supporting evidence a candidate can bring forward. These factors are likely to be program and specialty specific, while also including broadly sought attributes such as high academic performance and interpersonal skills.

Next, decide how you will weight or score each portion of the candidate’s application. There are three common ways to do this, each with pros and cons. The first way is to assign a weight or score to each component of the application (reference letters, transcript, etc.) and then assess, for each candidate, the strength of each component in relation to your factors of interest. The strength of this approach is that it enables you to weight each component of the application separately, on the basis of your views of the credibility and impact of each component, while also giving the assessors the freedom to use their expert judgment based on the criteria you have established. The downside is that a candidate may decide to concentrate their “evidence” in a different section of their application than you had expected (e.g., they may describe their volunteer experience in their letter rather than in their CV) and thus the score they receive for a particular component may not accurately reflect their merits. The second way is to assign weights to each factor of interest and then score each on the basis of the contents of the entire application. This approach allows assessors to seek evidence related to the factor of interest regardless of where it is found in the application file. The downside of this approach is that it makes it harder to standardize the impact of each application component (e.g., some assessors may find the reference letters more compelling while others may find the personal letter more influential). The third way is to take a more global approach and ask each assessor for a single overall score on the application, considering both the entirety of the application and the entire list of factors. This approach does allow assessors to make one holistic assessment using their expert judgment; there is some validity in this, if the assessor is trained and highly experienced. What is lost in this approach, however, is the ability to oblige assessors to make a deliberate decision on each factor of interest, as well as the data that would otherwise be available to inform final ranking decisions to break a tie or create desired balance in the ranked cohort (e.g., a balance between research-focused residents and community-based residents). The recommended approach is the second option above, which seeks objective assessments of each factor rather than of each component, but each of the three approaches is justifiable. Your committee must make a deliberate and considered decision about which makes most sense for your program and communicate this widely.
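To make the trade-off between the first two approaches concrete, here is a minimal sketch of how scores might be combined under each. The weights, factor names and ratings are hypothetical assumptions for illustration only, not values prescribed by this chapter.

```python
# Sketch of the two weighting approaches described above; all names, weights
# and scores are hypothetical and would be set by each program's committee.

# Approach 1: weight each component of the application package.
COMPONENT_WEIGHTS = {"reference_letters": 0.4, "transcript": 0.3, "personal_letter": 0.3}

# Approach 2 (recommended above): weight each factor of interest, scored from
# evidence found anywhere in the application file.
FACTOR_WEIGHTS = {"academic_performance": 0.4, "interpersonal_skills": 0.3, "research_experience": 0.3}


def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Combine per-item ratings (e.g., on a 0-10 scale) into one weighted score."""
    return sum(weights[item] * scores[item] for item in weights)


# One assessor's ratings for a single fictional candidate under each approach.
by_component = {"reference_letters": 8.0, "transcript": 7.0, "personal_letter": 9.0}
by_factor = {"academic_performance": 7.5, "interpersonal_skills": 9.0, "research_experience": 6.0}

print(weighted_score(by_component, COMPONENT_WEIGHTS))  # approach 1: 8.0
print(weighted_score(by_factor, FACTOR_WEIGHTS))        # approach 2: 7.5
```

Whichever approach you choose, recording the per-item ratings (not just the final number) preserves the data you may later need for tie-breaking and cohort-level review.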

5 tips for selecting residents

  1. Decide what your residency program is trying to accomplish through operation of the program.
  2. Decide what type of candidate you feel will have the best chance of success in your program.
  3. Decide what ‘evidence’ you will look for.
  4. Decide how you will weight or score parts of the application package.
  5. Decide on a priori decision-making and dispute resolution processes.

Assessing the applications and interviews

Candidates would ideally be assessed by a panel of assessors across the application review and interview phases. If you use a system where the interview is assessed independently from the application but scores from both the application and interview are used to generate the final ranking of candidates, it is important that the assessors doing the interviews not be the same people who scored the applications. Furthermore, the interviewers should not be party to information in the application. Only by creating this separation can the scoring of the applications and interviews be truly independent. If you “wipe the slate clean” after the application review and the final ranking of candidates is based only on the interviews, it may be appropriate to provide the interviewers with information about the candidate ahead of time (full application, only the CV, etc.). In this way a holistic view of the candidate still informs the final ranking. Both the application review and the interviews should involve multiple assessors. There are many valid ways this can be done. It is less important to quibble over whether there should be two interviews with three assessors each versus three interviews with two assessors each than it is to ensure that multiple individuals are involved (in this example, both circumstances involve six assessors). If you have a small number of applicants, you may be able to use one assessment team for all applicants, which will generate the most reliable scores across applicants. If you have a large number of applicants, however, multiple teams will be necessary.

The literature suggests that at least three independent assessments of each of the application and interview are required to produce a stable score, as long as the instruments and criteria are standardized, and the assessors are properly trained. For interviews, it is considered best practice to use standardized questions and scenarios for all applicants. Although not the only solutions to many of these challenges, use of a Multiple Mini Interview model or a skills demonstration model using an objective structured clinical examination (OSCE) would enable you to adhere to the key principles outlined in this paragraph.

Throughout the assessment and interview process, it is important to consider how biases may adversely influence selection processes. Although a comprehensive review of this literature is outside the scope of this chapter, best practices include but are not limited to: encouraging reflection and discussion about biases, standardization, blinding interviewers to application data, and including diverse voices and perspectives as part of interviewing and assessment.

Ranking

To create the final ranking of candidates, it is best to rely on the system that you have spent so much time designing and trust your independent assessors. Candidates’ average score across all independent assessments of their application and interview is going to be the best indicator of their relative ranking. You may need to tweak your final ranking process for a couple of reasons. The first is that you may need to assign a “do not rank” status to certain candidates. No matter how well a candidate may meet all of your predetermined criteria, they may say, do or convey something that gives you and your committee significant pause. These critical elements, which may involve interactions during social times or comments that a candidate makes while interacting with your team outside of the interview, may not be captured in your scoring rubric. You need a systematic way to allow concerns outside of the scoring rubric to be raised. The best advice is to consider these elements as grounds for a “do not rank” decision rather than adjusting the candidate’s ranking downward, because you want your team to focus on extreme and highly meaningful observations rather than getting bogged down in arguing over nuances and/or subtle behaviour quirks. The ‘litmus test’ question should be, “Would we rather risk leaving a position unmatched than risk matching this candidate to our program?” If the answer is yes, then assign a “do not rank” status. The decision to exclude a candidate from your list should not be taken lightly and should have clear and transparent justification that is discussed and agreed upon by a diverse group involved in selection rather than one person.
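For programs that generate numeric scores, the core of this step can be as simple as averaging the independent assessments and removing any candidate the committee has flagged. The sketch below is illustrative only; the data structure, scale and names are hypothetical assumptions.

```python
# Minimal ranking sketch: average each candidate's independent scores and
# exclude anyone the committee has assigned "do not rank" status.
# All names, scores and flags are fictional assumptions for illustration.
from statistics import mean

candidates = {
    "Candidate A": {"scores": [8.2, 7.9, 8.5, 7.8], "do_not_rank": False},
    "Candidate B": {"scores": [9.1, 8.7, 8.9, 9.0], "do_not_rank": False},
    "Candidate C": {"scores": [8.8, 9.2, 8.6, 9.1], "do_not_rank": True},  # excluded by committee decision
}

ranked = sorted(
    (name for name, c in candidates.items() if not c["do_not_rank"]),
    key=lambda name: mean(candidates[name]["scores"]),
    reverse=True,
)
print(ranked)  # ['Candidate B', 'Candidate A']
```

Keeping the underlying scores alongside the final order also makes it easier to document why a candidate was or was not ranked if questions arise later.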

Another important consideration is your program’s strategic aims and diversity. For example, if you are committed to rectifying a gender imbalance and the top 10 candidates for your three positions are all of the same gender (you might want to look at your process if this happens), you may want to swap some of the top candidates for other candidates whose scores would otherwise exclude them from the list. Similar arguments can be made for including some candidates with a strong focus in an area of priority for your program (underserviced population focus, quality assurance interest, etc.). One way to limit the temptation to debate every candidate’s merits is to ask committee members to validate the “diversity” of the rank list and to have a predetermined approach to use if they cannot. Establish limits to how far any one candidate can move up or down the list and mandate that candidates within a target group cannot be reranked relative to each other. For example, if your committee advises inserting two more men into your top 10 to achieve gender balance, then insert the two most highly ranked men rather than argue about which two it will be.

Once you decide on your rank list, go celebrate, have a good night’s sleep and trust the process. Deflect any further questions or advocacy with reassurance that your system has been adhered to.

5 Pitfalls to Avoid

  1. Beware of the ‘false meritocracy’ when adding up candidates’ accomplishments.
  2. Be careful to fully separate assessments to avoid double-counting.
  3. Ensure all involved are aware of appropriate legislation and rules.
  4. Be careful in assessing ‘fit’, strive to be objective and avoid intrinsic biases.
  5. Avoid assuming candidates are interested or appropriate based only on number of electives done.

Challenges

Finally, some important pitfalls await even the most well-intentioned and organized program. Five of these are touched on here.

The first pitfall is that of the false meritocracy: those who have achieved success and have accomplished some key “achievements” may have done so not because they have abilities that others do not have but because they have had privileges unrelated to their abilities that have given them a leg up. A smart, insightful, hardworking and highly competent applicant may not have achieved the highest score on a standardized examination or amassed a significant number of hours of community service, not because of a lack of ability but because they had to work two jobs to put themselves through undergraduate education and/or support a family rather than take two or three prep courses and spend a summer doing volunteer work. This applicant, in overcoming these competing demands, may be very well-suited to your program but overlooked if only key achievements are counted. Program Directors serve in an important role of leadership and influence. They must be able to help others recognize that a candidate who has not done electives with notable physician leaders may have come from a background that lacked connections in medicine or mentorship from family friends. This inadvertent bias in selection is hard to identify, which is why it is important not to set up an assessment system that relies simply on counting achievements. You should strive to understand your applicants by examining their rationale for the decisions they have made and their ability to self-assess and self-direct on the basis of their experiences.

The second pitfall is the inadvertent false separation of assessments. As a stark example, if you set up a system whereby you weigh the application and interview scores at 50% each and then provide the interviewers with access to the application, you are almost guaranteeing that the application will count for more than 50% of the final rank because assessors cannot ignore what they read in the application and it will influence their assessment of the interview. If you truly believe that the interview assesses different things than the application review (if you don’t, then why do the interview?) then you should let the interview be assessed on its own merits.

The third pitfall is failure to respect external constraints, such as local human rights legislation or institutional policies. Make sure that all involved in the process are aware of these constraints. Avoid all questions and comments that impinge upon prohibited grounds for decision-making. If it is against the law to discriminate on the basis of a certain factor then do not even bring it up in discussion.

The fourth pitfall is the consideration of the “fit” of future residents with the program. Although it is important to consider the uniqueness of your program and to calibrate processes and criteria to reflect these factors, “fit” can also be used intentionally or unintentionally to exclude certain candidates or have an adverse impact on equity, diversity, and inclusion. You must be cautious that your “fit” criterion is not used by committee members to focus on minor nuances to select individuals who are very similar to themselves or to those already in the program (including both their good and bad attributes). This is a concept known as affinity bias. There are ways to consider “fit” without seeking uniformity, however. Committee members should be trained to recognize hidden biases and share a collective commitment to professionalism, equity, diversity, and inclusion. When considering “fit”, assessors must think carefully about how they will assess candidates in this regard, challenging their own biases. Furthermore, if the issue of poor “fit” comes up for a candidate, the discussant must be pressed to articulate which of the established criteria or values are relevant in their assessment; “fit” cannot be used as a criterion in itself.

The fifth pitfall is the assessment of elective experiences. Just because an applicant did a ton of electives in your field does not mean they are going to be a good resident. Remember that candidates will have several years in your excellent program to become specialists. You want residents who have taken charge of their learning, who have used opportunities to broaden their mind and ensure they are making the right career decision, and who know how to become well-rounded through experiences; it takes very little imagination to choose electives that are all in one field. Furthermore, doing a concentration of electives in one area does not guarantee that the candidate is a high performer, nor does it guarantee that they are still committed to a discipline after several experiences. Be on the lookout for those late bloomers who got turned on to your field only after experiencing it for the first time later in medical school as demonstrated through their more senior elective choices and personal statements.

Conclusion

Selecting residents is one of the most important, fulfilling and enjoyable tasks a program director will undertake with their committee. Employing a thoughtful approach that incorporates key design elements will increase everyone’s confidence in the process and result in a better outcome for your program and, ultimately, for society.

References

  1. Railey MT, Railey KM, Hauptman PJ. Reducing bias in search committees. JAMA. 2016; 316(24):2595–6.
  2. Bandiera G, Abrahams C, Cipolla A, Dosani N, Edwards S, Fish J, et al. Best practices in applications & selection: final report. Toronto: University of Toronto; 2016. Available from: https://pg.postmd.utoronto.ca/wp-content/uploads/2016/06/BestPracticesApplicationsSelectionFinalReport-13_09_20.pdf
  3. Hofmans J, Judge TA. Hiring for culture fit doesn’t have to undermine diversity. Harvard Business Review. 2019 Sept. 18. Available from: https://hbr.org/2019/09/hiring-for-culture-fit-doesnt-have-to-undermine-diversity?referral=03759&cm_vc=rr_item_page.bottom
  4. Williams JC, Mihaylo S. How the best bosses interrupt bias on their teams. Harvard Business Review. 2019 Nov.–Dec. Available from: https://hbr.org/2019/11/how-the-best-bosses-interrupt-bias-on-their-teams