Has AI become too human? Time for a reality check on LLMs

While everyone was still processing the arrival of ChatGPT, the reveal of an even more advanced large language model (LLM), GPT-4, was a thunderbolt. As the speed of artificial intelligence (AI) development accelerates, we’ll all have to get used to these kinds of surprises.

LLMs continue to shock and impress us. New business opportunities and use cases are revealed all the time. Generative AI models are now used for everything from writing code to composing customer emails. But they also raise important questions with big ramifications for our society. How close are these models getting to human cognitive ability? And what’s the role of us humans as LLMs become pervasive across the enterprise?

Cognitive AI: a history of moving the goalposts

It seems that every time AI beats the tests we set for it, we change our definition of what makes us human. The ability to create and understand language was long held up as a distinctly human achievement. But the SuperGLUE benchmark shows AI models consistently beating humans in language understanding and reading comprehension. ChatGPT now generates convincing human language on demand. Yet few would argue that ChatGPT is ‘human’.

It could be argued that the human brain has its own LLM, centered in the temporal lobe. Wernicke’s area is a primary language center tasked with making sense of written and spoken language. However, just because we know where to look doesn’t mean we know how it works. The human mind remains a mystery. We barely grasp the fundamentals of how we process information. So, we can scarcely even guess how close our machine learning models are to human cognition.

Regardless, LLMs like ChatGPT have pulled us all into a new business and technology paradigm. Machines can now perform tasks that once depended on human creativity and understanding. The scariest and most exciting thing is that this is just the beginning. Who knows what AI will be able to do in two years' time?

So where does that leave us?

Watch your language: the limitations of LLMs

Can LLMs think like we do? It’s an open question. For one thing, LLMs arguably can reason and think critically about their outputs. In fact, asking ChatGPT to explain its reasoning for a response often induces it to perform better.
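
To make that claim concrete, here is a minimal sketch of "ask the model for its reasoning" prompting. It assumes the OpenAI Python SDK (v1+) with an API key in the environment; the model name and prompt wording are illustrative, not prescriptive.

```python
# A minimal sketch of asking the model to explain its reasoning before answering.
# Assumes the OpenAI Python SDK (v1+) and OPENAI_API_KEY set in the environment;
# the model name and prompt wording here are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()

question = "A train leaves at 14:05 and arrives at 16:50. How long is the journey?"

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works for this sketch
    messages=[
        {"role": "system",
         "content": "Explain your reasoning step by step, then state your final answer."},
        {"role": "user", "content": question},
    ],
)

print(response.choices[0].message.content)
```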

The root of the problem is that we can’t confidently say what sentience is. Nor can we measure it objectively. ‘What is human consciousness?’ is a question for philosophers as much as it is for neuroscientists. Which is another way of saying we’ll probably never know the answer.

One thing is clear, though. Having general AI that can understand and perform any task isn’t a reality just yet. That doesn’t mean LLMs aren’t useful to businesses, but we should be wise to their limitations.

LLMs are machine learning algorithms trained for a particular task: outputting relevant, human-sounding language in response to user prompts. That’s no small task, and its success has made the LLM the first truly general-purpose AI technology. But without external plug-ins, LLMs are limited to this use case. They can only do what they are trained to do and won’t pick up new skills or tasks independently.

Training data also limits the potential of LLMs. LLMs are trained on massive, generic datasets largely scraped from the internet. Training LLMs is expensive and energy intensive, meaning their creation is limited to a small number of big technology players. The most popular LLMs are free, open, third-party solutions that can’t guarantee data privacy.

Furthermore, while the training datasets are huge, they’re much wider than they are deep. Ask ChatGPT about a niche or highly specialized topic and it’ll struggle to give you a satisfying answer. And when LLMs don’t know the answer, they have a habit of making things up! For all the improvements introduced by GPT-4, OpenAI cautions that the model “still is not fully reliable”. It can generate strange, erroneous, or even offensive outputs. Such ‘hallucinations’, as they are called, can be hard to spot given how convincing LLMs are at imitating natural language.

An automation assistant: extracting business value from LLMs

Despite their shortcomings, you can’t doubt the value of LLMs for businesses. Every day, more than 13 million people use ChatGPT. Many are using it to support attended automations for tasks like summarizing a long document or composing a customer email. By 'attended', I mean tasks that are completed by AI or an automation system while under human supervision. It’s crucial to have a human in the loop to check and correct LLM outputs before they are actioned.
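
As a rough illustration of what ‘attended’ means in practice, here is a minimal sketch of a review gate: the model drafts, a person approves, edits, or rejects before anything goes out. The `draft_reply()` and `send_email()` helpers are hypothetical stand-ins for an LLM call and an outbound mail system.

```python
# A minimal sketch of an attended automation: the model drafts, a human decides.
# draft_reply() and send_email() are hypothetical stand-ins for an LLM call and
# an outbound mail system; the review gate in the middle is the point.
def draft_reply(customer_message: str) -> str:
    # In practice this would call an LLM; stubbed out for the sketch.
    return f"Dear customer,\n\nThank you for contacting us about: {customer_message}\n..."

def send_email(body: str) -> None:
    print("Sending email:\n" + body)

draft = draft_reply("my invoice shows the wrong billing address")
print("--- LLM draft ---\n" + draft)

decision = input("Approve, edit, or reject this draft? [a/e/r]: ").strip().lower()
if decision == "a":
    send_email(draft)
elif decision == "e":
    send_email(input("Enter the corrected reply:\n"))
else:
    print("Draft discarded; nothing was sent.")  # the human stays in control
```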

It’s these attended automations where LLMs generate the most value. Humans are in the driver’s seat even when using ChatGPT to do their work. Asking the right questions and phrasing the best prompts are now invaluable skills in the workforce. Employees should also be educated on how to use freely available LLMs safely and be aware of the data privacy risks.

Ultimately, businesses need humans to create automations as much as they need AI and software robots to carry them out. The future of LLMs lies in applications like Clipboard AI and Project “Wingman”, where employees leverage AI to drive automations and business success.

Technology leaders are also hard at work making LLM technology flexible, customizable, and business specific. UiPath Communications Mining gives businesses access to powerful LLMs they can customize to their specific needs. All while keeping a human in the loop to review and correct predictions that LLMs aren’t sure of.

Communications Mining is trained using a business’s own data. Unsupervised learning allows Communications Mining to accurately extract important data (like customer numbers, dates, and addresses) as well as emotions from business communications. But it’s the rapid active learning process where the real customization happens: employees teach the machine learning models the specifics of their business, like customer intents and reasons for contact.

In this way, businesses can create custom, accurate models that are fine-tuned to their exact needs and unique business context. These models help companies understand their business and service processes, and create the data needed for end-to-end automation.
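
For readers who want a feel for the active learning loop described above, here is a generic sketch of the idea using scikit-learn. It is not the Communications Mining API; the messages, intents, and labels are invented for illustration. The model flags the message it is least sure about, a person labels it, and the model retrains on the growing labelled set.

```python
# A generic sketch of an active learning loop (NOT the Communications Mining API):
# the model surfaces its least confident prediction, a person labels it, and the
# model is retrained on the growing labelled set.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

# Seed labels an employee has already provided (one intent per message).
texts = ["please update my billing address", "why was I charged twice this month"]
labels = ["address_change", "billing_dispute"]

# Messages with no labels yet.
pool = [
    "my card was debited two times",
    "we have moved, our new address is 12 High Street",
    "I was charged for a service I cancelled",
]

vectorizer = TfidfVectorizer()
for _ in range(2):  # a couple of rounds is enough to show the loop
    clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)
    probs = clf.predict_proba(vectorizer.transform(pool))
    i = int(np.argmin(probs.max(axis=1)))                     # least confident prediction
    answer = input(f"What is the intent of: '{pool[i]}'? ")   # the human teaches the model
    texts.append(pool.pop(i))
    labels.append(answer)

print("Model now trained on", len(texts), "labelled messages.")
```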

AI models can make mistakes, but so do humans. Businesses are cheating themselves if they don’t take steps to make LLMs a part of their workforce. An automation aid for every employee. A force multiplier for our cognitive abilities.

For more insights into AI development and business applications, watch the on-demand session recordings from UiPath AI Summit.

Editor's note: the views represented in this article are the author’s own and are not necessarily representative of UiPath.

Gabriel Barello

Researcher, UiPath
