
Why the Entire Selection Process is Losing Its Signal (and How to Fix It)

How generative AI is eroding the reliability of resumes, assessments, and interviews, and what talent acquisition leaders can do to restore signal in the hiring process.

Mar 23, 2026

For decades, the rigorous, multi-stage selection funnel was the “gold standard” of Talent Acquisition – a series of filters designed to isolate the true “signals” of candidate quality from the noise. We relied on the resume to screen for experience, standardized assessments to measure cognitive ability and personality, and structured interviews to verify both technical competence and job and organizational fit.

But those signals are fading in the age of generative AI (GenAI).

As GenAI becomes a standard desktop tool, candidates are no longer just using it to polish a sentence; they are using it to “co-pilot” their way through every stage of your funnel. Our research strongly suggests that the incidence of AI use is rising sharply, and it is fundamentally changing the way assessment works.

The adoption of GenAI tools has been explosive – ChatGPT’s weekly active users quadrupled from roughly 200 million to 800 million between the data collection points of our two studies. That surge in general use is spilling over into hiring. Our data shows that in late 2024, fewer than 3% of applicants reported using GenAI for assessments; by late 2025, that number jumped to nearly 19%. This is an urgent reality for every talent acquisition professional.

[Figure: AI growth]

The “Total Funnel” Erosion

  1. Resumes: The Loss of “Effort” Signals
    Traditionally, a well-structured resume signaled conscientiousness and detail-orientation. Our research found that these compositional qualities still drive hiring decisions – and by a surprising margin. In a study tracking 183 job applicants through a competitive hiring process, we found that a one-point improvement in the writing quality of both resumes and cover letters was associated with 7% more interviews (relative to applications submitted) and a nearly 10-day faster time-to-hire. Crucially, these effects remained strong after controlling for applicants’ actual work experience, achievements, and GPA. In other words, writing quality predicted hiring success independently of the substantive qualifications it’s supposed to reflect. However, these data were collected in 2024, and GenAI has improved so much that it can now manufacture these signals instantly. A perfectly structured resume no longer guarantees a highly organized human; it only guarantees a user who knows how to prompt an LLM. What once signaled effort is now effortless, making it harder for recruiters to distinguish between intrinsic candidate quality and AI-enabled presentation. We have heard countless anecdotes from hiring managers who are seeing larger and larger discrepancies between how candidates present themselves on their resumes and how they perform in interviews.
  2. The Quantitative Collapse: From GPT-4 to o1
    The defense that “AI isn’t good at math or logic” has evaporated. Recent benchmarking shows a staggering leap: while GPT-4 scored below the 20th percentile on quantitative ability (number series) tests, newer reasoning models like OpenAI’s o1 scored at the 95th percentile – and GenAI models have continued to improve markedly since o1 was released. Unproctored cognitive testing for high-stakes roles is essentially broken unless it is fundamentally redesigned.
  3. AVIs and AI-Scored Interviews: Faking the Algorithm
    Even when organizations move to AI-scored asynchronous video interviews (AVIs) to reduce human bias, the signal is under threat. On one hand, research shows that AI-scoring algorithms trained on structured human ratings are no more susceptible to traditional impression management than structured human interviews, and both are far more resistant to faking than self-report personality scales. That’s the good news. The bad news: when candidates use GenAI to script their responses, the picture changes dramatically. In an experiment comparing ChatGPT-assisted and unassisted AVI performance, candidates who used AI-generated responses scored vastly higher on overall interview performance – an effect driven entirely by content quality, not delivery. Even candidates who read ChatGPT’s output word-for-word performed just as well as those who personalized it. In short, AI-powered screening tools may resist old-fashioned faking, but they face a new adversary: candidates armed with the same technology.
  4. Personality Assessments: Algorithmic Perfection
    We investigated whether LLMs could inflate personality scores more effectively than humans. Our findings show that advanced models can “hack” assessments to produce ideal profiles for specific jobs. These tools are often as good as – if not better than – the savviest human “fakers” at matching the preferred personality traits.

The picture is clear: GenAI is degrading the signal at every stage of the funnel. But the solution is not to abandon these tools – it’s to adapt them. Here are six practical strategies, grounded in the latest research, to rebuild the integrity of your selection process.

Strategies to Restore the Signal

  1. Use “Honesty Agreements” and Strategic Warnings
    Our research found that explicit warnings and honesty agreements have a deterrent effect. Framing the assessment as a tool for finding a role where the candidate will genuinely thrive is more effective than a purely punitive warning.
  2. Use “Fake-Resistant” Measures
    Traditional “Rate 1-5” personality questions are sitting ducks. Our studies indicate that phrase-based forced-choice formats – where a candidate must choose between two equally positive traits – are significantly more resistant to AI manipulation.
  3. Implement Layered Monitoring: The Friction-to-Surveillance Spectrum
    The vendor landscape for monitoring has exploded in sophistication and produced several promising tools to mitigate cheating. For example:

    • Basic Safeguard: Disabling copy-paste and right-click functions so questions cannot be instantly copied into an LLM.
    • Middle Ground: Passive Monitoring (Trace Data). Tracking behavioral markers like unusual latencies, tab-switching, or “window-blurring.”
    • High-Intensity: Lockdown browsers and live human proctoring.

At the same time, we need to be aware that new strategies introduce new risks. Systems that rely on trace data, such as tracking how applicants move their mouse or how fast they type, can lead to bias. Research into AI fairness in hiring warns that these features may inadvertently penalize neurodivergent candidates or those with different levels of technical literacy, potentially increasing adverse impact against protected groups.
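To make the passive-monitoring idea concrete, here is a minimal sketch of how trace data might be screened for suspicious response latencies – for example, a candidate who answers one item near-instantly after taking 30 seconds on every other item. The function name, threshold, and scaling constant are illustrative assumptions, not any vendor’s actual detection logic, and flags like these should trigger human review, never automatic rejection.

```python
from statistics import median

def flag_latency_anomalies(latencies_ms, threshold=3.5):
    """Return indices of item response times that deviate sharply from
    the candidate's own median, using a robust modified z-score.
    The 3.5 cut-off is an illustrative convention, not a validated rule."""
    med = median(latencies_ms)
    # Median absolute deviation (MAD): a spread estimate that is not
    # distorted by the very outliers we are trying to find.
    mad = median(abs(x - med) for x in latencies_ms) or 1.0
    flags = []
    for i, x in enumerate(latencies_ms):
        # 0.6745 rescales MAD so the score is comparable to a z-score
        score = 0.6745 * (x - med) / mad
        if abs(score) > threshold:
            flags.append(i)
    return flags

# One near-instant answer (2 s) amid ~30 s responses gets flagged:
flag_latency_anomalies([30000, 28000, 35000, 31000, 2000, 29000])  # → [4]
```

A uniform pattern produces no flags, which is the point: the baseline is each candidate’s own behavior, not a population norm – a design choice that partially mitigates the adverse-impact concerns raised above, though it does not eliminate them.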

  4. Consider Employing Synchronous Work Samples
    Use synchronous “edit” sessions for final-round candidates. If you’re hiring a coder, ask them to fix a flawed piece of code or a spreadsheet while sharing their screen and explaining their thought process. This moves the evaluation from “output” (which AI can produce) to “process” (which the human must demonstrate).
  5. Interview for Process and Verifiable Facts
    Instead of asking “What would you do?”-style hypothetical questions that can be gamed by AI, ask about verifiable past experiences, and require applicants to “bring the receipts.” Applicants should be able to provide direct evidence of their experiences or contact information for referees who can attest to their past successes.
    When applicants provide a polished, textbook behavioral answer, don’t just accept it. Research shows these can be easily scripted by AI to outscore human originals. Ask dynamic probes that require applicants to demonstrate real-world cognitive agility, and shift from assessing the “story” to assessing the “thinking.” For example: “You mentioned the team was resistant to that change – what was the specific technical objection raised, and why did you decide your counter-argument was more valid than theirs in that moment?” While an AI could generate a hypothetical answer to this, a candidate relying on a live script will struggle with the latency and consistency required to keep the ruse going in a fast-moving, synchronous environment.
    Finally, if possible, conduct interviews in person. This doesn’t just make it harder for candidates to cheat using AI. Research suggests that candidates are more likely to be honest if interviewers take the time to make a personal connection and treat them with the respect only humans can give.
  6. Assess “Prompting” as a Skill
    For roles where AI is a standard tool, assess prompt engineering ability. Evaluate how effectively the candidate uses the tool to reach a superior outcome, rather than penalizing the tool’s use.
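A synchronous “edit” session needs nothing elaborate. Below is a hypothetical example of what a hiring team might hand a coding candidate: a short function with one deliberately planted bug, alongside the kind of fix the candidate would be expected to reach while explaining their reasoning aloud. The scenario, function names, and bug are invented for illustration.

```python
# Hypothetical prompt: "This function should return the average value of
# non-cancelled orders, but it gives answers that are too low. Find and
# fix the bug while talking through your reasoning."

def average_order_value_buggy(orders):
    total = 0
    for order in orders:
        if order["status"] != "cancelled":
            total += order["amount"]
    # Planted bug: divides by ALL orders, including cancelled ones,
    # deflating the average whenever any order was cancelled.
    return total / len(orders) if orders else 0.0

# One correct fix: divide only by the orders actually included in the sum.
def average_order_value_fixed(orders):
    kept = [o["amount"] for o in orders if o["status"] != "cancelled"]
    return sum(kept) / len(kept) if kept else 0.0
```

The exercise is deliberately small: the signal is not whether the candidate produces the fix (an LLM can), but whether they can locate the flaw, articulate why it is wrong, and defend their repair in real time.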

Defending the Signal

The “AI-proof” selection process is a myth. Protecting the integrity of the hire requires multiple, layered, and constantly updated defenses. In high-volume selection, you cannot eliminate AI use, but you can build a funnel that prioritizes verifiable signals. By implementing these layered defenses, you ensure that the talent you hire isn’t just a clever echo of an algorithm, but the actual human capability your organization needs.
