
“Allowing machines to participate in and perhaps even dictate decisions in Human Resources raises a host of ethical considerations. After all, these systems control various forms of opportunity for the workforce. More precisely, they involve people’s livelihood, hopes and dreams.” - John Sumser

Uncoded Bias


Just as humans have unconscious bias, machines have uncoded bias. Machines are incapable of seeing things that aren’t measured.


For example, it is entirely possible to increase the number of job offers extended to female candidates without ever once offering a job to a female candidate with a Body Mass Index over 25. Small hints of social class that go unmeasured in the core data can skew outcomes within a given category.


So, while it is true that a machine can do a better job at relentlessly sticking to a narrow script, it cannot see or understand things that are not in the data. Unlike people and their unconscious biases, machines can only change their approach with new measurement and new coding. In other words, while machines may be able to address small components of unconscious bias, they cannot address all (or even most) of it.


The Tendency Of Data Models To Skew (Bias)


Intelligent tools take variability as input and produce stability as output. Even if you aren’t literate in statistics (and you might want to get there), you’ll recognize the idea of ‘regression to the mean’: extreme measurements tend to drift back toward the average, and a model trained on its own outputs reproduces that average. It is how data models wear out. Once the model comes out of the lab, it is placed in its regular data flow. Unless there is a deliberate effort to keep introducing variability into the system, the model will discover and eradicate the variability. That leaves a static organization focused on the average. While useful as an aspiration (everyone wants to hit their KPI), it’s rare that a data model is built to consider going beyond the KPI.
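You can watch this narrowing happen in a toy simulation. The sketch below is entirely illustrative (the scores, cohort sizes, and the notion of “fit” are invented assumptions, not a real hiring model): each round, a selector keeps only the applicants most similar to the average of its existing hires, and the spread of the cohort collapses.

```python
import random

random.seed(0)

# First cohort: hired before the model existed, so scores vary widely.
hires = [random.gauss(50, 15) for _ in range(20)]
initial_spread = max(hires) - min(hires)

for _ in range(5):
    mean = sum(hires) / len(hires)
    applicants = [random.gauss(50, 15) for _ in range(100)]
    # "Fit" is defined as similarity to the current average hire --
    # the model discovers and eradicates variability, round after round.
    hires = sorted(applicants, key=lambda a: abs(a - mean))[:20]

final_spread = max(hires) - min(hires)
print(f"spread: {initial_spread:.1f} -> {final_spread:.1f}")
```

The applicant pool never changes; only the selection does, and that is enough to leave a static organization focused on the average.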


In practical terms, intelligent tools tend to become enforcers of policies rather than discoverers of value. For example, sifting a candidate pool to remove anyone who meets less than 80% of the job’s requirements eliminates the possibility that you will find a Bill Gates, or anyone whose potential and ability to learn outweigh their prior experience. Said another way, data models are great at producing predictable results. They are not very good at delivering to a constantly changing target. This is partly because they are validated by looking backwards. Perhaps the single most important principle in business is the idea of continuous improvement. There is good reason to believe that current ideas of data models inhibit continuous improvement. Intelligent tools wear out when they stop creating new value.
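The 80% cutoff described above can be sketched in a few lines. Everything here is a hypothetical illustration (the field names, the skill matching, and the candidates are invented, not a real applicant-tracking API); the point is that no signal for potential survives a hard requirements threshold.

```python
def requirements_met(candidate, requirements):
    """Fraction of listed requirements found in the candidate's skills."""
    matched = sum(1 for r in requirements if r in candidate["skills"])
    return matched / len(requirements)

def sift(pool, requirements, cutoff=0.80):
    """Drop anyone below the cutoff -- the enforcer, not the discoverer."""
    return [c for c in pool if requirements_met(c, requirements) >= cutoff]

requirements = ["python", "sql", "5 years experience", "ba degree", "erp system"]
pool = [
    {"name": "Safe Hire",
     "skills": ["python", "sql", "5 years experience", "ba degree"]},
    # A self-taught builder with huge upside but little formal background:
    {"name": "High Potential",
     "skills": ["python", "shipped own product"]},
]

survivors = sift(pool, requirements)
print([c["name"] for c in survivors])  # the high-potential candidate is gone
```

The filter is doing exactly what it was told to do; the bias lives in what was never measured.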


Unmeasured Bias


It’s the piece you miss that hurts you most. Earlier, we compared unconscious bias in humans to uncoded bias in machines. Both notions refer to complex, parallel problems. In both cases, the problem involves unknown unknowns. (It was Donald Rumsfeld who said, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”)


Neither machines nor humans are great at knowing what they don’t know, but humans are much better at understanding and accounting for the gap. They work to uncover their biases. The process of disrupting technology and commerce is rooted in discovering the assumptions that limit growth and innovation.


Machines, on the other hand, are best at solidifying processes and procedures. Google Maps can always tell you which route is faster. It can never tell you which route is better (prettier, more fun to drive, better for gas economy, or other important variables). It doesn’t have the data with which to make such an assertion.


Machine decision inputs are always biased towards the goal. Human decisions are more open to unanticipated variables. Constraining decisions exclusively to a singular goal introduces the kinds of bias that come with any set of blinders. Inflexibility, narrowness and lack of agility are the consequences.
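The Google Maps argument above amounts to a single-objective chooser. This sketch is purely illustrative (the routes and their attributes are invented): the machine’s goal is baked into one field, and every other variable in the record exists but never influences the decision. Those are the blinders.

```python
routes = [
    {"name": "Highway",    "minutes": 22, "scenery": 1, "fuel_liters": 3.1},
    {"name": "Coast road", "minutes": 31, "scenery": 5, "fuel_liters": 2.4},
]

# The objective function sees only one key. Scenery and fuel economy are
# right there in the data, but the goal excludes them from the decision.
fastest = min(routes, key=lambda r: r["minutes"])
print(fastest["name"])  # always the quickest route, never the "better" one
```

Widening the objective is possible, but someone has to decide to measure and code for it, which is exactly the uncoded-bias problem.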


Three Aspects to the Ethics of Bias in AI and Intelligent Tools


  1. It is critically important that we understand the degree to which our biases impact opportunities for protected classes. The law requires this. There is much work to be done to expand access to employment opportunity for all people. It is better understood as a journey than a destination.

  2. Equally important is understanding the ways in which our biases inhibit the growth and flexibility of our organizations. Unconscious bias about business purpose, methods, intent and practice are the things that make our companies vulnerable to competitors. Again, this is an ongoing critical function.

  3. Finally, as we introduce machine intelligence into our operations, we are introducing a new kind of bias: uncoded bias. Just as we must constantly reassess our biases in employment and business practice, we have to do the same with machines.

