Why Your Evaluation System Has Bias and Why That is Wrong


When I was in graduate school, back in the dark ages, I remember taking a class called the “History of Methods,” and it was an eye-opening experience for me. For the first time, I got to see how inherently biased traditional research methods could be. Fast forward to today, when we are so much more aware of how unfair the world can be.

Despite this, over the 15 years that I have been doing field work, I have seen very few changes to the way we evaluate, even as our awareness of bias and systemic discrimination has grown.

I still see many organizations approaching their evaluation work with the same nonchalance that they always had. When we dive into the world of evaluation, whether it’s for programs, research, or even day-to-day decisions, bias is a sneaky companion that often tags along, sometimes unnoticed. Understanding bias and how it operates is crucial, especially in evaluation systems, where the stakes can be high and the impacts far-reaching.

[Image: people of different races]

What is Bias?

Bias, in simple terms, is a leaning or inclination that’s often preconceived or unreasoned. It can skew our judgment, leading to assessments or decisions that are not entirely fair or accurate. In evaluations, these biases can come from our personal experiences, beliefs, or cultural backgrounds, and can significantly influence the outcomes of our assessments.

Examples of Bias in Research

Let’s look at three examples where bias has notably influenced research:

Historical Biases: Throughout history, there have been numerous instances where research was tainted by blatant biases. A famous example is the ‘Tuskegee Syphilis Study,’ where researchers knowingly withheld treatment from African American men suffering from syphilis to study the progression of the disease. This was a glaring case of racial bias, leading to unethical treatment of participants.

Gender Bias: Another well-known example is the underrepresentation of women in clinical trials. For years, medical research was predominantly conducted on male subjects, leading to a lack of understanding about how different treatments affected women. This gender bias has had lasting impacts on women’s health care.

Cultural Bias: Cultural biases often manifest in research that’s based on assumptions from a particular cultural perspective, which may not be applicable or accurate when applied to other cultures. This can lead to misinterpretations and inaccurate conclusions about behaviors or phenomena in different cultural contexts. Cultural bias is also common well beyond research settings. Imagine a hiring manager at a multinational corporation looking through job applications. They come across a resume with a name they find difficult to pronounce and notice that the applicant has listed experience with companies in countries they are not familiar with. Despite the applicant’s qualifications, the hiring manager subconsciously assumes that the candidate might not fit into the company culture and decides to give preference to applicants with more ‘familiar’ backgrounds. This is a form of cultural bias known as affinity bias, where the hiring manager gravitates toward candidates who share traits or backgrounds similar to their own.

Subtle Biases in Evaluation: Moving on to more subtle biases, these can often be seen in institutional or organizational settings. For example, a large government grantor might use full-time employment as a key metric for success in their programs. However, for individuals facing barriers, such as those with disabilities or caretaking responsibilities, full-time employment might not be a realistic or desired outcome. This creates a bias in evaluation that overlooks the actual needs and goals of the program’s beneficiaries.

[Image: an illustration of inclusivity]

Different Types of Bias

Halo Effect: The halo effect is a cognitive bias where our overall impression of someone influences our thoughts about their specific traits. For instance, if we like someone, we might also perceive them as more competent, even without evidence. In evaluations, this can lead to overly favorable assessments based on personal feelings rather than objective data.

Central Tendency Bias: This occurs when evaluators avoid extreme judgments and rate everything as average. It’s a safe play but can mask the true performance or impact of a program or individual.
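To see what central tendency bias looks like in numbers, here is a minimal, illustrative sketch in Python. The program names, scores, and cutoff values are entirely made up for this example: it simply shows that when an evaluator compresses every rating toward the middle of the scale, the gap between the strongest and weakest programs nearly vanishes.

```python
# Minimal sketch with made-up numbers: central tendency bias in ratings.

# Hypothetical "true" performance of five programs on a 1-10 scale.
true_scores = {"Program A": 9, "Program B": 7, "Program C": 5,
               "Program D": 3, "Program E": 2}

def compress_to_middle(score, low=4, high=6):
    """Mimic an evaluator who avoids extremes and rates everything near average."""
    return min(max(score, low), high)

rated_scores = {name: compress_to_middle(s) for name, s in true_scores.items()}

print("True spread: ", max(true_scores.values()) - min(true_scores.values()))    # 7 points
print("Rated spread:", max(rated_scores.values()) - min(rated_scores.values()))  # 2 points
# The strongest and weakest programs now look nearly identical, even though
# their underlying performance differs dramatically.
```

The exact numbers are not the point; the point is that a compressed rating habit, rather than the programs themselves, ends up determining what the evaluation can detect.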

Recency and Spillover Bias: Recency bias is when recent events or experiences unduly influence our evaluation. For example, if a program had a recent success, we might overlook past failures, leading to an imbalanced assessment. Spillover bias is the mirror image: earlier results, good or bad, carry over and color how we judge current performance.

Negativity Bias: This is our tendency to focus more on negative aspects than positive ones. In evaluations, this can lead to disproportionately emphasizing the shortcomings of a program or individual, overshadowing their successes or positive attributes.

Researchers themselves can carry biases that we need to “check” at the door when doing this work. These can include:

  • Personal Beliefs and Values: A researcher’s own beliefs, values, and experiences can shape their perspectives and influence their approach to research, including the questions they ask, the methodology they choose, and how they interpret data.
  • Confirmation Bias: Researchers may favor information that confirms their existing beliefs or hypotheses and overlook or undervalue evidence that contradicts them.
  • Selection Bias: This occurs when researchers selectively include or exclude certain data or subjects in a way that is not random, potentially skewing the results.
  • Funding Sources: Researchers might be influenced by the expectations or interests of funding sources, consciously or unconsciously shaping their research to align with these interests.
  • Publishing Pressure: The “publish or perish” culture in academia can lead researchers to pursue trendy or positive results that are more likely to be published, sometimes at the expense of rigorous methodology.
  • Sampling Bias: This happens when the sample studied is not representative of the population from which it was drawn, leading to results that cannot be generalized (a short sketch of this appears right after this list).
  • Experimental Bias: Expectations about an experiment can lead to subtle changes in a researcher’s behavior, which can influence participants’ responses (also known as the experimenter effect).
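To make sampling bias a bit more concrete, here is a minimal, illustrative sketch in Python. All of the numbers are assumptions invented for this example: it simply shows how a follow-up survey can overstate satisfaction when the people most likely to respond are also the people most likely to be happy with the program.

```python
# Minimal sketch (all numbers are hypothetical): sampling bias in a
# follow-up survey, where satisfied participants respond more often.
import random

random.seed(42)

# Imagine 1,000 program participants; assume about 60% are truly satisfied.
participants = [{"satisfied": random.random() < 0.60} for _ in range(1000)]

def returned_survey(person):
    # Assumed response rates: 80% for satisfied participants, 20% for the rest.
    rate = 0.80 if person["satisfied"] else 0.20
    return random.random() < rate

respondents = [p for p in participants if returned_survey(p)]

true_rate = sum(p["satisfied"] for p in participants) / len(participants)
observed_rate = sum(p["satisfied"] for p in respondents) / len(respondents)

print(f"True satisfaction rate: {true_rate:.0%}")   # roughly 60%
print(f"Survey-based estimate:  {observed_rate:.0%}")  # noticeably higher
# The survey overstates satisfaction because it over-samples the happiest
# voices, not because the program performed better than it did.
```

In this made-up scenario, the survey reports roughly 85% satisfaction while the true rate is closer to 60%, not because the program improved, but because the sample was skewed toward its happiest participants.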

These are just some ways in which researchers can exhibit bias. It’s important for researchers to be aware of these potential biases and take steps to minimize their impact on research outcomes. Tomorrow, we will discuss ways to mitigate different types of bias in our research methods and evaluation practices.

[Image: the different types of bias we can find]

Closing Thoughts

In conclusion, bias in evaluation is a universal challenge that requires our constant vigilance. From the blatant biases of the past to more subtle instances in modern-day evaluations, it is crucial to recognize and address these tendencies. By understanding different types of biases and actively working to diminish them, we can ensure that our evaluations are fair, accurate, and truly reflective of the realities they aim to measure. Remember, acknowledging bias isn’t a sign of weakness; it’s a step toward more ethical and effective evaluations and more representative results.

If you found value in this blog, we would love to hear from you. Please feel free to contact hello@pharononprofit.com to give us feedback, ask questions or leave your comments.
You can also access more content on this and other issues facing nonprofits by joining our free or premium memberships at: https://pharononprofit.com/join-now/

