When it comes to AI-based hiring assessment tools, we are presently at the bottom of the Technology Adoption Curve staring upward. Those brave enough to be “innovators” and “early adopters” must accept that emerging assessment technologies are far from perfect.
Yes, innovation in AI-based assessments is happening, but the pace is slow. Massive sums of money are being invested because automating predictive hiring decisions while increasing their accuracy will result in billions if not trillions in ROI.
Unfortunately, assessments are subject to the same factors that are limiting the advancement of AI in many areas. Chief among these is the fact that machines are not good at using abstract reasoning to perform complex, judgment-based tasks or to infer deeper levels of meaning from data.
As flashy as AI may seem, here in 2018, even the most advanced AIs are nowhere close to replicating the abilities of the human brain. One of the core functions of the human brain is the ability to understand other humans and ascribe meaning to their behaviors. This requires a higher-level analysis of cognition that can see the whole from the sum of its parts. What we are essentially talking about here is the science of psychology.
Human psychology is based on the idea that there are many factors that make each and every individual unique (i.e., “individual differences”). Psychology seeks to study and measure these individual differences in order to understand the reasons for human behavior and make predictions about what individuals will do in certain situations.
When it comes to hiring, the measurement and interpretation of individual differences is essential to predicting whether an applicant will be successful. Psychologists create hiring assessments specifically for this purpose. For hire-bots to do their job as well as or better than humans, they are going to have to understand individual differences the same way that humans do. In other words, to be truly game changing, hiring assessment AIs are going to have to think like psychologists.
How Psychologists Measure Individual Differences
Psychologists measure individual differences using a very specific process that centers around the interplay of two things: psychometrics and expert judgment. Psychometrics is:
“The field of study concerned with the theory and technique of psychological measurement, which includes the measurement of knowledge, abilities, attitudes, and personality traits. The field is primarily concerned with the study of differences between individuals. It involves two major research tasks, namely: (i) the construction of instruments and procedures for measurement; and (ii) the development and refinement of theoretical approaches to measurement.”
Psychometrics provides the tools needed to ensure confidence in the accurate and reliable measurement of the personal attributes that make us who we are (i.e., our personality, attitudes, mental abilities, etc.). In psychometric science these attributes are referred to as “constructs,” defined as “a proposed attribute of a person that often cannot be measured directly, but can be assessed using a number of indicators or manifest variables.” Measuring constructs requires a set of expertly constructed tools that can “see below the surface” of a person in order to understand their individual differences.
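To make the idea of indicators concrete, here is a minimal sketch of how a construct score might be computed in practice, assuming a handful of hypothetical Likert-scale items that serve as indicators of a single construct; the item responses, the construct name, and the classical-test-theory scoring approach are all illustrative assumptions, not a description of any particular assessment.

```python
import numpy as np

# Hypothetical 1-5 Likert responses from five applicants to four items that
# serve as indicators of one construct (e.g., "conscientiousness").
# Rows = respondents, columns = items; the numbers are illustrative only.
responses = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
], dtype=float)

# The construct itself is never observed directly; a simple classical
# test theory approach estimates it as the mean of its indicator items.
scale_scores = responses.mean(axis=1)

# Cronbach's alpha: a standard psychometric check that the indicators
# hang together well enough to be treated as a single scale.
k = responses.shape[1]
item_variances = responses.var(axis=0, ddof=1)
total_variance = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

print("Scale scores:", scale_scores)
print(f"Cronbach's alpha: {alpha:.2f}")
```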
Psychometrically sound construct-based measures of individual differences are a non-negotiable factor in the construction of quality hiring assessments. When creating predictive hiring tools, trained experts (usually I/O psychologists) rely on their judgment to determine the specific constructs required for success at a given job and then configure psychometric tools to evaluate job applicants relative to these constructs.
Machines and Their Understanding of Individual Differences
When it comes to hiring, AI does not rely on individual differences because it is not capable of understanding them. While empirically derived hiring tools have tremendous value in situations governed by relatively rigid rules, they are not suited to the complex judgments required to infer meaning from a pile of data.
Instead of making expert predictions based on the psychometric measurement of constructs, AI essentially “learns” from patterns in data and applies that learning to make predictions on new data sets. For instance, an AI can scan a resume to identify words and patterns of words that may predict an individual’s potential for success at a given job. Such AIs can be “trained” on a dataset built from the resumes of high-performing incumbents; the AI then simply compares the patterns in applicants’ resumes to those found in the resumes of successful employees. While this method definitely has the potential to be effective, it represents a relatively atheoretical, data-based approach to building predictive models known as dustbowl empiricism.
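As a rough illustration of this pattern-matching approach (not any particular vendor’s pipeline), the sketch below trains a simple text classifier on toy resume snippets labeled by performance; the texts, labels, and the choice of scikit-learn tools are assumptions made purely for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Resumes of current employees, labeled 1 for high performers and 0 otherwise.
# Both the texts and the labels here are toy placeholders.
train_texts = [
    "managed key accounts and exceeded quarterly sales targets",
    "data entry clerk responsible for filing and scheduling",
    "led cross-functional team to launch new product line",
    "answered phones and maintained office supplies",
]
train_labels = [1, 0, 1, 0]

# The model learns which word patterns co-occur with the "high performer"
# label -- pure pattern matching, with no notion of underlying constructs.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# Score a new applicant's resume against the learned patterns.
applicant = ["account manager who led a regional sales team"]
print(model.predict_proba(applicant)[0][1])  # predicted "success" probability
```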
So, while AIs can be trained to recognize “features” that may predict human behaviors, they cannot generalize this information in a way that equates to the psychometric measurement of constructs. Without this understanding, AIs are unable to pick up on individual differences. This inability to understand people through the lens of individual differences means that AIs make lousy psychologists.
Thinking Like a Psychologist
Even though we are putting a ton of resources into building AIs that can predict job success using big data sets, the psychometric measurement of individual differences is still critical to hiring assessments because:
- Current hiring AIs are not up to the task. We are still in the early days when it comes to AI-based hiring technologies. Fully automated hire-bots are not yet able to best the tried-and-true model of measuring individual differences using psychometric tools.
- Hiring AIs do not generalize well. AIs have issues generalizing across situations; they are very specific to the data set they were trained on. Once the AI is turned loose on a different set of data, it may struggle to replicate its results. This is referred to as “overfitting” a model, and it can be a significant problem when applying an AI in new situations. Empirically keyed models such as those used by AIs also tend to destabilize over time and must be retrained on a regular basis.
- AIs present risk. While using AI for hiring assessment is definitely not illegal, it is required to meet the same standards as all hiring assessments (as put forth in the Uniform Guidelines on Employee Selection Procedures and the SIOP Principles for the Validation and Use of Personnel Selection Procedures). If an AI was not created based on a job analysis that clearly outlines the traits required for job success, and if it cannot be demonstrated that the AI actually measures these constructs, there is risk from a compliance standpoint. This is especially true of the complex neural networks that make up deep learning AIs.
- AIs can introduce bias. Empirically driven approaches to pattern recognition and decision making can introduce bias. (Also see “Fixing Bias in AI.”) Machines can be trained to avoid these biases, but the danger still exists; such is the nature of a highly advanced black box that learns in ways we cannot fully explain. Thankfully, many assessment providers that are using AI are doing the right thing and calibrating their systems to remove bias. One common first check for this kind of adverse impact is sketched after this list.
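One concrete way to monitor the bias and compliance risks above is the four-fifths (80 percent) rule from the Uniform Guidelines, which compares each group’s selection rate to that of the highest-selected group. The sketch below shows that arithmetic on made-up selection counts; it is an illustration of the check, not a complete adverse impact or compliance analysis.

```python
# Hypothetical selection counts by group after an AI-scored screening round.
outcomes = {
    "group_a": {"applied": 200, "selected": 60},   # selection rate 30%
    "group_b": {"applied": 150, "selected": 30},   # selection rate 20%
}

rates = {g: c["selected"] / c["applied"] for g, c in outcomes.items()}
highest = max(rates.values())

# Four-fifths (80%) rule: flag any group whose selection rate is less than
# 80% of the highest group's rate -- a common first screen for adverse impact.
for group, rate in rates.items():
    ratio = rate / highest
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.0%}, impact ratio={ratio:.2f} -> {flag}")
```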
While AI-based assessments do have limitations, they still have a bright future and it is critical that we continue to invest in helping them evolve to a point where they can become good psychologists.
How We Get There
To think like psychologists, AI-based hiring assessments will require AI to reach human-like levels of comprehension. This may take a while. Evolving our hiring tools requires collaboration between humans and machines. Chess provides a great example of what is possible when humans and machines join forces: while a computer can now beat even the best human players, a computer and a human playing together can beat even the most advanced chess computer.
AIs do not need higher-level cognition to help us hire better. There are currently many situations where AI is partnering with psychologists to help them do their jobs better.
When it comes to hiring assessments, creating synergies with AI can definitely help us sharpen our ability to predict job success through an understanding of individual differences. The future requires us to:
- Set realistic expectations. Beware of silver bullets promising instant results. Evolving AI requires a long-term focus; it took a hundred years of dedicated work to get where we are now.
- View data as constructs, not just patterns. Ensure that psychometrically sound measures of the relevant constructs are part of the equation. Don’t rely on patterns of data alone to predict job success; instead, use AI to help build the psychometric models required to truly understand individual differences (see the sketch after this list).
- Break it down! Use experts (psychologists) to break jobs down into the constructs that are required for success. A good job analysis is an important foundation that can be used to help AIs understand what should be measured in predictive equations.
- Collaborate. Doing this right requires multidisciplinary teams that include equal parts psychology and data science. Success will require both psychologists and computer scientists to understand and include one another’s perspectives.
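As one way to picture the “constructs, not just patterns” point above, the sketch below runs an exploratory factor analysis on simulated item-level data to ask whether the items reflect a smaller number of underlying constructs. The data are simulated, the construct names are hypothetical, and scikit-learn’s FactorAnalysis is just one convenient tool for the illustration.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate item-level assessment data driven by two latent constructs
# (e.g., "conscientiousness" and "numerical ability"); purely illustrative.
n = 300
latent = rng.normal(size=(n, 2))
loadings = np.array([
    [0.8, 0.0],   # items 1-3 load on construct 1
    [0.7, 0.1],
    [0.9, 0.0],
    [0.0, 0.8],   # items 4-6 load on construct 2
    [0.1, 0.7],
    [0.0, 0.9],
])
items = latent @ loadings.T + rng.normal(scale=0.3, size=(n, 6))

# Exploratory factor analysis: do the six items recover two constructs?
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(items)
print(np.round(fa.components_, 2))  # estimated loadings of each item
```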
Until machines can think like humans, AI-based hiring assessments will fall far short of their ultimate potential while exposing users to increased risk. But there is a lot to be gained by using AI to help psychologists better understand individual differences. So when creating the hire-bots of tomorrow, don’t forget to include good old-fashioned psychology in the mix.