AI Chatbots’ Human Feel: Unspoken Conversation Rule


Earlier this year, a finance worker in Hong Kong was duped into paying out US$25 million to fraudsters who had used deepfake technology to pose as the company’s chief financial officer in a video conference call. Believing the images on screen were his colleagues, the worker authorized the transfer of the money to the scammers.

It’s a dramatic example, but the confused office worker was far from alone in being fooled by generative AI. This technology, which relies on large language models trained on vast quantities of data to learn and predict patterns in language, has become increasingly widespread since the launch of ChatGPT in 2022.
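To make “learning and predicting language patterns” concrete, here is a minimal, purely illustrative sketch of next-word prediction using a toy bigram model. The tiny corpus and the function name are invented for illustration; real large language models are vastly larger and more sophisticated, but the core task, predicting likely continuations from patterns in training text, is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real language models train on vast text collections.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# For each word, count which words follow it (a simple bigram model).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat': pattern statistics, not understanding
```

The point of the sketch is simply that fluent-looking output can emerge from pattern statistics alone, with no understanding behind it.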

How can we explain why some people interacting with generative AI chatbots are so readily convinced they are conversing with some kind of person? The answer may lie in the rules of conversation themselves, and in how deeply ingrained they are in the way we interact with the world.

Why the hype?

Generative AI systems based on language models produce text that linguists call “ungrounded”: it lacks the shared mutual understanding that is an essential ingredient of conversation between people. But when humans engage in an interaction, we tend to assume our partner has a mind.

We find it hard not to anthropomorphize the various non-humans we interact with. Before the recent leaps in generative AI, these were primarily animals.

Now it’s machines. Before generative AI, our interactions with computers generally involved minimal linguistic back and forth. But with generative AI, computer systems respond to us in a way that feels real. That feeling of realness is a big part of the technology’s success.

So how does it work?

Language and thought

In a recent paper, the US-based researchers Kyle Mahowald, Anna Ivanova and colleagues argue that we conflate language and thought. Because generative AI systems can use language more or less fluently, we believe they can think. (For the record, they can’t.)

The authors distinguish between formal linguistic competence (knowledge of rules and statistical patterns in language) and functional linguistic competence (how to use language in real-life situations).

It is not enough to produce rule-governed utterances. They must also be used appropriately in context. In this example (created using ChatGPT-4o when prompted to produce a correct but inappropriate conversational response), Alex offers a reply that, while factually correct, fails to address their friend’s need for support.

Friend: I’m nervous about my presentation tomorrow.

Alex: Stress activates your body’s fight-or-flight response, which increases heart rate and cortisol levels.

Mahowald and colleagues say that our faith in the intelligence of generative AI systems comes from their capacity for language. A crucial piece of the puzzle, however, is what happens to humans when we interact with technology.

The rules of conversation

The key to understanding the allure of generative AI chatbots for humans lies in the genre the bots perform: conversation. Conversation is governed by rules and routines.

Conversational routines vary between cultures, and with them the expectations people bring to talk. But at least in Western cultures, linguists often regard conversation as proceeding according to four principles, or “maxims”, set out in 1975 by the British philosopher of language Paul Grice.

The maxim of quality: be truthful; do not provide information that is false or unsupported by evidence.

The maxim of quantity: be as informative as required; do not give too much or too little information.

The maxim of relevance: provide only information relevant to the topic under discussion.

The maxim of manner: be clear, concise and orderly; avoid ambiguity and obscurity.

Finding relevance at all costs

Generative AI chatbots usually do well on quantity (though they sometimes err on the side of giving too much information), and they tend to be relevant and clear (one reason people use them to improve their writing).

But they often fall foul of the maxim of quality. They are prone to hallucinate, giving answers that may seem authoritative but are in fact false.

However, the crux of the success of generative AI lies in Grice’s assertion that anyone engaged in meaningful communication will follow these maxims and will assume that others also follow them.

For example, the reason lying works is that people interacting with a liar will assume the other person is telling the truth. People interacting with someone who makes an irrelevant comment will try to find relevance at all costs.

Grice’s cooperative principle holds that conversation is underpinned by our overall drive to understand one another.

Willingness to cooperate

The success of generative AI thus depends in part on the human impulse to cooperate in conversation and our instinctive pull toward interaction. This way of interacting through conversation, learned in childhood, becomes habitual.

Grice argued that “it would take a great deal of effort to make a radical departure from habit”.

Next time you use generative AI, do so with caution. Remember, it’s just a language model. Don’t let your habitual readiness to cooperate in conversation trick you into treating a machine as a fellow human.

The Conversation

Celeste Rodriguez Louro does not work for, consult with, own stock in, or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

/Courtesy of The Conversation. This material from the originating organization/authors may be dated and edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all opinions, positions and conclusions expressed herein are solely those of the author(s).