Excitement about ChatGPT has been swirling in the tech space since right before the winter holidays. Its public release, open to anyone, has helped deliver some honest-to-goodness innovation hype.
GPT-3 (short for “Generative Pre-trained Transformer 3”), developed by OpenAI, generates human-like text responses and can be fine-tuned for a variety of applications. It can understand and respond to what people say in a far more natural way than earlier chatbots. For commercial users, it can incorporate data from their own sources to deliver responses that are less chatty and more informational.
It’s incredibly good. And that’s coming from someone who was buddies with SmarterChild back in the AOL Instant Messenger days!
While conversational AI and chatbots have been around for many years and have been used in a variety of applications, does ChatGPT mean that nobody needs to worry about developing another solution? Are chatbots and conversational AI providers in trouble?
Conversational AI for Recruiting Ain’t New
It’s hard to believe, but two-and-a-half years ago, Madeline Laurano of Aptitude Research released the most comprehensive report I’ve read on conversational AI in recruiting. At that point, 38% of companies were using or planning to use conversational AI solutions in 2020, compared to just 7% in 2019. Of those that had invested in the tools by 2020, satisfaction was extremely high and candidates also generally had positive experiences. My guess is that these high marks are only better today.
Providers of these tools have mostly moved on from the programmed chatbots of the past. Those chatbots rely on predefined scripts and rules, which can make them feel robotic and impersonal. Anything other than the most basic (and expected) request can break them.
While leaders in conversational AI, like Paradox, have had the funding to do deep development of their own natural-language-processing (NLP) and machine-learning (ML) models, newer companies had to rely heavily on pre-trained models like the various BERTs (pre-trained language representations) or older versions of GPT.
The Advantages of GPT-3
The gap between GPT-3 and BERT (or really anything else) is enormous. The pre-training done on GPT-3 is orders of magnitude larger than anything that came before it. That means that recruiting technology providers can spend more time integrating data sources and ensuring they are working properly, and less time training a model on the difference between a phone screen and an in-person interview.
Out of the box, GPT-3 can understand different languages and contextual requests incredibly well. It remembers what you say, and it integrates that knowledge into the conversation. The ability to handle oddball requests or ask for clarification gives the platform a conversational quality that feels human to a certain extent.
Conversational AI providers aren’t in any immediate danger, though. With NLP and ML talent impossibly expensive, an explosion of new tools isn’t coming tomorrow. Even with ChatGPT and its extensive abilities out of the box, productizing it, integrating it with your system, and ensuring it runs smoothly in a production environment isn’t child’s play. If you’re a buyer in this space and you run into a provider promising conversational AI coming to their platform this year, proceed with caution.
That said, it lowers the bar for a minimum viable product in this space considerably. With the heavy lifting on the conversational side of things taken care of, it’s only natural to figure out ways of using these tools to your advantage. I would expect more GPT-powered tools in recruiting in the coming few years.
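To make the productizing work concrete, here is a minimal sketch of the kind of glue code a GPT-powered recruiting tool would need: grounding the model in the employer’s own data by assembling a prompt from job details and chat history before sending it to a completion endpoint. All function and field names here are hypothetical illustrations, not any vendor’s actual API.

```python
# Hypothetical sketch: grounding a GPT-3-style model in an employer's own data.
# build_prompt() and the job_context fields are illustrative assumptions.

def build_prompt(job_context: dict, history: list, question: str) -> str:
    """Assemble a grounded prompt: role instructions, employer data, then chat history."""
    lines = [
        "You are a recruiting assistant. Answer only from the job details below.",
        f"Title: {job_context['title']}",
        f"Location: {job_context['location']}",
        f"Process: {job_context['process']}",
        "If the answer is not in the details, ask the candidate to contact a recruiter.",
        "",
    ]
    for speaker, text in history:          # replay prior turns so the model keeps context
        lines.append(f"{speaker}: {text}")
    lines.append(f"Candidate: {question}")
    lines.append("Assistant:")             # the model completes from here
    return "\n".join(lines)

# The assembled prompt would then go to a completion endpoint, e.g. with the
# openai package (network call omitted in this sketch):
#   import openai
#   openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=150)

job = {
    "title": "Line Cook",
    "location": "Austin, TX",
    "process": "phone screen, then in-person interview",
}
prompt = build_prompt(
    job,
    [("Candidate", "Hi!"), ("Assistant", "Hello! How can I help?")],
    "What does the interview process look like?",
)
print(prompt)
```

The conversational heavy lifting lives in the model; the provider’s differentiation is in this surrounding layer — which data sources feed the prompt, and what guardrails govern the answer.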
The Disadvantages of GPT-3
GPT-3 is still a fairly new model (first released in beta in mid-2020), and ChatGPT was just released late last year. Because the model is so massive, we also don’t know all of the possible issues. ChatGPT does have human moderators and protections, though, so it doesn’t give “problematic” answers or turn into a Nazi, unlike a certain other chatbot.
ChatGPT can also be used to give recommendations or even make decisions, but organizations looking to stay compliant with New York City’s upcoming rules on AI will have to take it slow. Given that it was trained on human-produced content, including a crawl of most of the open web, you could encounter DEI challenges if they aren’t corrected for by OpenAI’s team.
I also wouldn’t be a good writer if I didn’t tell you that once you understand how the stock ChatGPT responds, it’s pretty easy to suss it out. You’ll get word-for-word sentences repeated back to you, even if you ask a question in a different way. Not only should organizations disclose that their candidates are chatting with a technology that isn’t a human being, but using ChatGPT also becomes less differentiating as more people adopt it.
The Bots Are Here
As for recruiters, if there is any consolation, your job is as safe as ever. The leap forward in ChatGPT is enormous, but practical solutions that actually get rid of headcount will take time.
For the use case of conversational AI, where this technology is already used to enhance the candidate experience, ChatGPT will be somewhere between evolutionary and revolutionary. Being able to focus development on the world around the actual conversation and novel implementations will be exciting to see.
As with the promise of customer-service chatbots that can actually be helpful instead of forcing you to type “HUMAN HUMAN HUMAN” to reach a real person, there is a lot of potential here. Still, most of the excitement seems further in the distance. And that’s probably a good thing as early adopters pressure-test and push ChatGPT to see what’s pragmatically possible.