Introduction to Fairness, Bias, and Adverse Impact

Establishing a fair and unbiased assessment process helps avoid adverse impact, but doesn’t guarantee that adverse impact won’t occur. Establishing that your assessments are fair and unbiased is an important precursor, but you must still play an active role in ensuring that adverse impact does not occur.

When developing and implementing assessments for selection, it is essential that the assessments and the processes surrounding them are fair and generally free of bias. These terms (fairness, bias, and adverse impact) are often used with little regard to what they actually mean in the testing context. How do fairness, bias, and adverse impact differ?

First, we will review these three terms, as well as how they are related and how they are different. We then review Equal Employment Opportunity Commission (EEOC) compliance and the fairness of PI Assessments.

For a more comprehensive look at fairness and bias, we refer you to the Standards for Educational and Psychological Testing.

What is Fairness?

Fairness encompasses a variety of activities relating to the testing process, including the test’s properties, reporting mechanisms, test validity, and consequences of testing (AERA et al., 2014). We cannot compute a simple statistic and determine whether a test is fair or not. Instead, creating a fair test requires many considerations. It’s also important to note that it’s not the test alone that is fair, but the entire process surrounding testing must also emphasize fairness.

How can a company ensure their testing procedures are fair? 

First, all respondents should be treated equitably throughout the entire testing process. This means that every respondent should be treated the same, take the test at the same point in the process, and have their results weighted in the same way. The test should be given under the same circumstances for every respondent to the extent possible. Importantly, if one respondent receives preparation materials or feedback on their performance, then so should the rest of the respondents. Respondents should also have similar prior exposure to the content being tested. For example, an assessment is not fair if it is offered only in a language that some respondents do not speak natively or fluently.

Next, it’s important that there is minimal bias present in the selection procedure. 

What is Bias?

Bias occurs if respondents from different demographic subgroups receive different scores on the assessment as a function of the test. This can take two forms: predictive bias and measurement bias (SIOP, 2003).

Predictive bias occurs when there is substantial error in the predictive ability of the assessment for at least one subgroup. This type of bias can be tested through regression analysis and is deemed present if there is a difference in slope or intercept between subgroups. For example, if a personality test predicts performance but is a stronger predictor for individuals under the age of 40 than for individuals over the age of 40, predictive bias is present.
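The regression check described above can be sketched in code. The following is a minimal illustration (not any particular test publisher's actual procedure): a moderated regression with an interaction term, where the group coefficient captures an intercept difference and the interaction coefficient captures a slope difference between subgroups.

```python
import numpy as np

def moderated_regression(score, group, outcome):
    """Fit outcome ~ b0 + b1*score + b2*group + b3*(score * group).

    b2 reflects an intercept difference between subgroups and b3 a slope
    difference; in practice both would be tested for statistical
    significance before concluding that predictive bias is present.
    """
    X = np.column_stack([np.ones_like(score), score, group, score * group])
    coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    return coefs  # [b0, b1, b2, b3]
```

If, say, the interaction coefficient is reliably negative for one subgroup, the assessment predicts performance less strongly for that group, matching the over-40 example above.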

Measurement bias occurs when the assessment’s design or use changes the meaning of scores for people from different subgroups. At The Predictive Index, we use a method called differential item functioning (DIF) when developing and maintaining our tests to see if individuals from different subgroups who generally score similarly have meaningful differences on particular questions. If such a difference is present, this is evidence of DIF and suggests that measurement bias is taking place. For example, imagine a cognitive ability test where males and females typically receive similar scores on the overall assessment, but there are certain questions on the test where DIF is present, and males are more likely to respond correctly. This suggests that measurement bias is present and those questions should be removed.
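One common way to operationalize this kind of item-level check (though not necessarily the exact method any given publisher uses) is the Mantel-Haenszel procedure: respondents are matched on total score, and within each score level the odds of answering a particular item correctly are compared across subgroups. A minimal sketch:

```python
def mantel_haenszel_odds_ratio(strata):
    """Estimate the common odds ratio for one item across score levels.

    Each stratum is a 2x2 table (a, b, c, d) for one total-score level:
    a = reference group correct, b = reference group incorrect,
    c = focal group correct,     d = focal group incorrect.
    An estimate near 1.0 suggests no DIF; values far from 1.0 suggest the
    item favors one subgroup even among similarly scoring respondents.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den
```

An item flagged this way would then be reviewed and, as described above, potentially removed from the test.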

In our DIF analyses of gender, race, and age in a U.S. sample during the development of the PI Behavioral Assessment, we only saw small or negligible effect sizes, which do not have any meaningful effect on the use or interpretations of the scores. Similar studies of DIF on the PI Cognitive Assessment in U.S. samples have also shown negligible effects. 

Bias is a component of fairness—if a test is statistically biased, it is not possible for the testing process to be fair. However, a testing process can still be unfair even if there is no statistical bias present. It is important to keep this in mind when considering whether to include an assessment in your hiring process—the absence of bias does not guarantee fairness, and there is a great deal of responsibility on the test administrator, not just the test developer, to ensure that a test is being delivered fairly. Let’s keep in mind these concepts of bias and fairness as we move on to our final topic: adverse impact.

What is Adverse Impact?

When used correctly, assessments provide an objective process and data that can reduce the effects of subjective or implicit bias, or more direct intentional discrimination. However, the use of assessments can itself result in adverse impact.

Adverse impact occurs when an employment practice that appears neutral on the surface nevertheless leads to unjustified negative outcomes for members of a protected class. Unlike disparate treatment, which is intentional discrimination, adverse impact (also known as disparate impact) is unintentional in nature. In practice, adverse impact is commonly assessed using the 4/5ths rule, which involves comparing the selection or passing rate of each subgroup with that of the group with the highest selection rate (the reference group). A selection process violates the 4/5ths rule if the selection rate for a subgroup is less than 4/5ths, or 80%, of the selection rate for the reference group.
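The 4/5ths comparison itself is simple arithmetic. A short sketch (the function name and the example numbers are purely illustrative, not from any real case):

```python
def adverse_impact_ratio(subgroup_hired, subgroup_applicants,
                         reference_hired, reference_applicants):
    """Ratio of the subgroup's selection rate to the reference group's.

    The reference group is the group with the highest selection rate.
    A ratio below 0.8 violates the 4/5ths rule of thumb.
    """
    subgroup_rate = subgroup_hired / subgroup_applicants
    reference_rate = reference_hired / reference_applicants
    return subgroup_rate / reference_rate

# Illustrative numbers: the reference group hires 60 of 75 applicants (80%),
# the subgroup hires 48 of 80 (60%); 0.60 / 0.80 = 0.75, below the 0.8 threshold.
ratio = adverse_impact_ratio(48, 80, 60, 75)
```

A ratio like 0.75 would flag the process for closer review, though as noted below, a flagged practice is not automatically illegal.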

Adverse impact is not in and of itself illegal; an employer can use a practice or policy that has adverse impact if they can show it has a demonstrable relationship to the requirements of the job and there is no suitable alternative. This is the “business necessity” defense. 


References

  • American Educational Research Association, American Psychological Association, National Council on Measurement in Education, & Joint Committee on Standards for Educational and Psychological Testing (U.S.). (2014). Standards for Educational and Psychological Testing.
  • Society for Industrial and Organizational Psychology. (2003). Principles for the Validation and Use of Personnel Selection Procedures.
