This week, for our blog post on bias, we are not going to talk about a single bias. Instead, as we already did with climate change, we will look at a cluster of biases, in particular some of those that affect performance management.
Innovations in this area have appeared regularly in recent years, in an attempt to create systems that are as fair as possible and that support the learning and development paths of those involved: 360° rating systems, self-assessment cross-checked against the manager's assessment, refined competency frameworks, consistency checks that bring a collective dimension to the assessment and moderate its subjectivity, increasingly precise KPIs, right up to the OKR goal-setting systems popularised by Google as an evolution of MBO, and continuous feedback… to name but a few. In reality, even well-designed performance management systems capable of grasping the complexity of organisational action remain anchored to a basic, spontaneous human activity: observation, and the process of attributing meaning and interpretation to what is observed.
It is from this perspective that the growing attention to the distortions and traps inherent in these processes becomes an interesting reflection for both the person being evaluated and the evaluator. Awareness and transformation of unconscious biases, those of individuals but also those rooted in the organisational culture, become crucial if these systems are really to generate the individual and collective learning needed to meet the challenges of the organisational context.
In what follows, let us try to categorise some of these biases, even if, as we shall see, precise categories are difficult and somewhat artificial, since several biases often combine in a single evaluative act.
Biases related to identity factors of the evaluating manager
– Identity bias (or similar-to-me bias). It derives from the ancestral tendency to form relational subsets, in-groups and out-groups, based on characteristics actually possessed by others or projected onto them, which make them feel similar to us or distant from us. Belonging to one group or another is a strong identity factor. The person perceived as similar to us is therefore evaluated and managed more favourably than the person perceived as "different". Numerous studies show that gender, ethnicity, educational background, religion and age are among the in-group/out-group factors with the strongest impact on evaluation. Beyond a more favourable evaluation of those perceived as similar, this bias also shows up in how the performance evaluation is communicated, for example through the use of the pronoun "you" for those perceived as out-group and "we" for those who are in-group, with consequences for the sense of organisational belonging, the feeling of being recognised, and motivation. To mitigate this bias, it is also important that diversity is represented at all hierarchical levels.
– Attribution bias (or opportunity bias). This is the tendency to attribute successes to ourselves and our abilities, and failures to bad luck or causes external to us. The tendency is reversed for the people we assess: when this bias is at work, their good performance is attributed to luck or favourable conditions in the context, while for a poor performance only the person's shortcomings are highlighted. Combined with identity bias, it can generate a systematically positive or negative slant in the assessment, attributing to some people only merit and to others only the intervention of fate, and vice versa.
Biases linked to the use of rating scales
– Leniency bias. The manager uses the rating scale in a systematically generous way. The indulgence may be greater for some employees (see the biases above) but may also be more generalised. Behind this bias lie the evaluator's meta-models of reality, such as "I need to be loved, and if I evaluate realistically I will not be loved anymore", "I evaluate generously to signal encouragement so the person will do better", or "if I evaluate a performance negatively I will have to face a conflict, and that scares me", along with a distorted idea of "kindness" that overlooks the fact that the aim of management and evaluation is not to punish but to generate learning in both the evaluator and the evaluated.
– Severity bias. The manager systematically evaluates more severely. The mental models behind this systematic error may be, for example, "I've paid my dues, now the person being appraised has to pay theirs", or "if I give high ratings the person will stop working hard". Numerous studies have tried to link personality traits (e.g. measured with the Big Five test) to systematic errors on the scales, for instance connecting emotional stability and extroversion to lenient use, and vice versa. Interesting results have emerged from recent research on the link between generous or severe use of the scales and, once again, the identity characteristics of the person assessed, which highlighted the risk of greater severity towards non-dominant groups (women, people of colour, LGBT+ people, cognitively diverse people, etc.). Another interesting aspect is the use of scales in self-assessment, linked to the famous "impostor syndrome", which consists (in part) of a systematic severity error in self-assessment that produces a feeling of inadequacy and illegitimacy in the person.
– Central tendency. Especially on odd-numbered scales, the tendency to use only the central values rather than the whole scale, so as to avoid the responsibility that comes with using the extreme values.
Biases related to a partial focus on the performance of the assessed
– Positive and negative halo effect. The halo effect, one of the first biases to be studied, occurs when one positive or negative aspect of the performance is singled out and emphasised, so that it colours the whole assessment. For example, John has very strong skills in customer negotiation, contract closing and team management, but rarely speaks up in meetings. Based on this last characteristic, his manager might evaluate his entire performance negatively. I have chosen the example of speaking up in meetings also because, according to some research, there is a positive halo effect favouring those who are good at speaking in public. The halo effect can be even broader and concern not so much a part of the performance as characteristics of the person, in particular attractiveness, enthusiasm and positivity, which are associated with effective performance to the point of concealing poor results.
– Recent memory bias (or availability bias). It consists of the belief that an event that happened recently is more likely to happen again. Hence, in performance management, the tendency to recall mainly the last three or four months of performance and leave the rest of the year in the shadows. A curious effect of this bias is the so-called "hot hand", a metaphor taken from sport, where researchers have studied the tendency to pass the ball more often to players who have just scored, in line with the belief that one success is easily followed by another (a belief that reinforces itself, because greater possession of the ball creates more opportunities to score). In the business environment, this effect leads to interesting and challenging projects being assigned one after the other to people who have succeeded in one project, recreating the conditions for another success. A good way of counteracting this bias is through continuous feedback systems or the OKR methodology as a whole.
– First impression effect. In contrast to recent memory bias, this effect anchors us to the first general impression we formed of the person and makes us fall back on the judgement made in the first few seconds of the relationship, regardless of the results the person has actually achieved. A good first impression can thus hide poor performance, and a bad first impression produces the opposite result. In a future post on bias we will talk about the well-known Harvard research on "warmth & competence".
Comparison biases
– Contrast effect. One of the ways we learn, as human beings, is by comparing pieces of information to analyse their differences and similarities. Applied to performance management, this thinking routine distracts us from the object of our observation (the relationship and results of an individual in relation to his or her objectives) and moves us towards comparisons with other members of the organisation, or between members of the same team. Performance is thus assessed not for the added value the individual delivers against their objectives, but as better or worse than that of other team members.
– Job vs individual bias. In most organisations there are mental models that lead to a focus on certain roles, perceived as contributing more to the production of results than others. I am thinking, for instance, of research roles in hi-tech companies or sales and marketing roles in consumer companies (where we have heard these two functions referred to as "la voie royale", the royal road). This bias consists of favouring, in performance management, the roles that sit in the functions perceived as adding the most value in the company, while evaluating roles considered minor less favourably, with a negative impact on the sense of fairness.
At the end of this roundup, the evaluator may feel a little uneasy 😊. Here are some ideas to help contain these biases.
- We cannot say it enough: the more awareness we have of how we think and of the processes that lead us to our frameworks for action, the better our chances of spotting biases and errors. This means helping our rational side to participate in the process as much as possible; performance management tools exist precisely for this, to take the evaluative activity out of pure spontaneity. Awareness also needs time, and this is another key factor. Evaluations done in a hurry, at the last minute, in a ritualistic way just to fill in "the report card" (how many organisations still use this term!) are the breeding ground for bad evaluations. A good appraisal creates the conditions for better performance in the next period, so it is not a cost in terms of time but an investment in the future and in a good team climate.
- Having a performance management and development system that is as well articulated as possible, with properly written objectives, truly relevant measurement indicators, competencies described clearly and factually, and multi-channel feedback, is an important part. But, as mentioned above, no system can be completely bias-free, starting with the biases of the people who designed it.
- An interesting device is the consistency check. At the end of the evaluations, the evaluating managers and their peers get together to explain how they came to place people on the scale. The peer managers and other participants in the meeting challenge each assessment through counterexamples, questions about specific behaviours observed, and so on. It is a good solution, and the consistency checks we have witnessed have been great learning moments, as long as people play along and are willing to see not only each other's biases but also their own, and to work at the organisational level, asking themselves, for example: "What are we not seeing because of habits, routines, 'this is how we do it'?"
- When you have many team members, it is good not to do all the evaluations at once. If you reread the list of errors above, it is clear that adding a large number of evaluations done in a single afternoon makes it very difficult to remember who did what.
- Continuous feedback becomes a very good tool, especially when the evaluator and the evaluated can agree on its content and keep a shared record of it. The advantage, beyond the evaluation itself, lies above all in the person's learning cycle, which is thereby enhanced.
- A culture of acceptance and of growth through error helps the assessor and the assessed to open a dialogue in which the relationship is protected from the risk of the "single story". The eye (and brain) of the evaluator is not infallible. There are (at least) two versions of the story and, through a clear and factual account of events, both can enrich the reconstruction that has been made.
Photo credit: Rob Gonsalves