
“All tools contain embedded biases. Bias can be introduced long before the data is examined and at other parts of the process. Meanwhile, one group says people are the problem; the other sees them as the solution.” - John Sumser

 

Bias in AI: Are People the Problem or the Solution?

 

When it comes to bias, one group assumes that people are the problem; the other sees them as the solution. The human group assumes that people should be the decision makers when lives and careers are affected, and it illuminates bias so that people can make the choice. The tech group tries to eliminate or bury bias so it can’t be seen, and it assumes that things work better when humans are not involved. Meanwhile, bias-related technical tools fall into a couple of major categories.

 

The tech group includes:

  • Assessment tools that ‘reduce or eliminate’ hiring bias
  • Search tools that redact information that can trigger bias
  • Matching tools that improve talent pool diversity
  • Structured interviewing tools that constrain and drive questions

 

The human group includes:

  • Vendors whose tools illuminate bias in behavior and offer alternative choices.

 

Both kinds of tools can be applied as part of a larger cultural transformation process. While any individual tool may not solve the issue on its own, the fact that the organization is using these tools and taking action to mitigate bias sends a powerful message that can be the beginning of a larger cultural change.

 

The ‘can technology solve bias’ question is an important part of a larger conversation. Using intelligent tools can help us more clearly understand the variables involved in building businesses that value merit and contribution over monoculture. Interestingly, it may be that the outcomes delivered by intelligent tools are less important than the questions that they raise.

 

Creating Bias

 

There is a separate and arguably larger bias question. This one involves understanding whether the machine is introducing bias in some way. Where the first question asked, ‘how do we use intelligent tools to reduce unwanted organizational bias?’, this question concerns the role of machines in causing or amplifying bias. They are related but separate topics.

 

Every intelligent tool comes embedded with the biases of its creators. In the best of all possible worlds, the tool represents the limits of the views and experiences of the team members involved in the project. In the worst case, the tool was a less than perfectly executed subtask of some larger, more important, and more interesting project. Just as construction quality can vary within a housing development, the quality of intelligent tool construction can vary.

 

Whatever the case, all tools contain embedded biases. Bias can be introduced long before the data is examined and at other parts of the process. There are three primary ways in which bias is introduced into a computer model:

 

  1. Framing the problem: Every intelligent tools project begins with a clear definition of what is to be accomplished. In the case of job matching, this is often something like ‘find me more resumes like this’ or ‘find resumes that meet these job requirements.’ In assessments, the problem definition is more complex and involves sorting individual transactional data into a predetermined framework. In many cases, the model looks like, ‘here are our top performers, we want more like that.’

  2. Collecting the data: With resumes, data collection involves accumulating quantities of documents with varying formats. In assessments, it’s the instrument, game, or other simulation. In employee and engagement surveys, it’s the development of bias-free questions. In each case, the data collection method invariably introduces bias into the data. The design of the method can be assessed for the kinds of bias it might introduce, but the bias itself only shows up in the data the method actually collects, which makes it difficult to spot.

  3. Cleaning and preparing the data: Whether done by machine (in free text using NLP), formally created with a collection instrument, or scavenged from existing sources, all data requires cleaning, usually an enormous amount of cleaning. From granular corrections of abbreviations to a single term (e.g. standardizing L, L., LN, LANE, Ln, Ln., lane, and Lane) to creating categories for free-text entries in an ‘other’ box, the work of data cleaning involves detailed attention to the data so that the machine can use it. A minimal sketch of that kind of cleanup follows this list.
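
To make the abbreviation example concrete, here is a minimal sketch, in Python, of that kind of granular cleanup. The lookup table, field names, and function names are illustrative assumptions, not any vendor’s actual pipeline.

    import re

    # Hypothetical lookup of variant spellings to a single canonical term.
    STREET_SUFFIXES = {
        "l": "lane", "l.": "lane", "ln": "lane", "ln.": "lane", "lane": "lane",
        "st": "street", "st.": "street", "street": "street",
    }

    def normalize_token(token: str) -> str:
        """Map a single token to its canonical form, if one is known."""
        return STREET_SUFFIXES.get(token.lower(), token.lower())

    def clean_address(raw: str) -> str:
        """Lowercase, split on whitespace, and standardize known abbreviations."""
        tokens = re.split(r"\s+", raw.strip())
        return " ".join(normalize_token(t) for t in tokens)

    print(clean_address("123 Elm LN."))   # -> 123 elm lane
    print(clean_address("123 Elm Lane"))  # -> 123 elm lane

Even a rule this small embodies choices, such as which spellings count as ‘the same’, which is exactly where bias can quietly enter.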

 

When a company says that it can reduce or eliminate bias in a model, it is not making a claim about all of the bias that might be there. It is always making a claim about specific kinds of bias, usually those regulated by law. The processes involved in model design and execution include an infinite array of places for bias to seep in.

 

There are a host of readily available open source tools that can be used to monitor the output of digital processes to see if the final output is biased. Ensuring that the data flow is clean at a legal and social justice level is extremely important. But success in those areas is not bias elimination; it is legal compliance.
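
As an illustration of what that kind of output monitoring can look like, here is a minimal sketch that compares per-group selection rates and flags any group whose rate falls below the widely cited four-fifths threshold. The sample data and function names are illustrative assumptions; purpose-built open source toolkits go much further than this.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs -> selection rate per group."""
        totals, picked = defaultdict(int), defaultdict(int)
        for group, was_selected in decisions:
            totals[group] += 1
            picked[group] += int(was_selected)
        return {group: picked[group] / totals[group] for group in totals}

    def adverse_impact_ratios(decisions):
        """Ratio of each group's selection rate to the highest group's rate."""
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {group: rate / best for group, rate in rates.items()}

    # Purely illustrative decisions: (group label, was the candidate advanced?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    for group, ratio in adverse_impact_ratios(sample).items():
        status = "review" if ratio < 0.8 else "ok"   # EEOC four-fifths rule of thumb
        print(group, round(ratio, 2), status)

A check like this can confirm compliance on the dimensions it measures; it says nothing about the many other forms of bias that may remain in the model.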

 

Benedict Evans, a partner at the VC firm Andreessen Horowitz, puts it succinctly:

 

  • “Machine learning finds patterns in data. ‘AI Bias’ means that it might find the wrong patterns – a system for spotting skin cancer might be paying more attention to whether the photo was taken in a doctor’s office. ML doesn’t ‘understand’ anything – it just looks for patterns in numbers, and if the sample data isn’t representative, the output won’t be either. Meanwhile, the mechanics of ML might make this hard to spot.”

  • “The most obvious and immediately concerning place that this issue can come up is in human diversity, and there are plenty of reasons why data about people might come with embedded biases. But it’s misleading, or incomplete, to think that this is only about people – exactly the same issues will come up if you’re trying to spot a flood in a warehouse or a failing gas turbine. One system might be biased around different skin pigmentation, and another might be biased against Siemens sensors.”

  • “Such issues are not new or unique to machine learning – all complex organizations make bad assumptions and it’s always hard to work out how a decision was taken. The answer is to build tools and processes to check, and to educate the users – make sure people don’t just ‘do what the ‘AI’ says.’ Machine learning is much better at doing certain things than people, just as a dog is much better at finding drugs than people, but you wouldn’t convict someone on a dog’s evidence. And dogs are much more intelligent than any machine learning.”

 

Bias is the noisiest of the ethics questions facing HR

 

Ethical questions about how we use data and technology in organizations are growing in prominence and importance. Bias is the noisiest of the ethics questions facing HR. Beyond bias, there are questions of privacy, trust, liability, labeling, transparency, design, maintenance of models, management of tools, consequences of the technology, and applicability of generalized models.

The overarching point is that ethics will become a central component in 21st century management. Operating a workforce with a blend of digital and human workers requires different emphases in decision making. We will be able to accomplish amazing things with our teams, but the way we do that will change.

 

For a deeper look at the bias question, see:

  • This is how AI bias really happens—and why it’s so hard to fix by Karen Hao, MIT Technology Review Read
  • AI bias will explode. But only the unbiased AI will survive. by IBM Read
  • Notes on AI Bias, by Benedict Evans Read
  • Design Justice: an organization that creates design processes that center on those most likely to be adversely impacted by them Read
  • Trust Thou AI? In AI Strategy and Policy, by Kim Larsen Read
  • All automated hiring software is prone to bias by default by Erin Winick, MIT Technology Review Read
  • All the Ways Hiring Algorithms Can Introduce Bias by Miranda Bogen, HBR Read