MIT is creating a tool to verify AI responses and check for inaccuracies

The Massachusetts Institute of Technology (MIT) has developed a system to help check whether artificial intelligence is “hallucinating” – industry jargon for when chatbots give incorrect or bizarre responses to user queries.

To help human fact-checkers who must plow through the multiple long and complex documents that bots sometimes use to produce answers, MIT researchers and engineers created “a user-friendly system” that they say “enables humans to verify an LLM’s (Large Language Model) answers much faster.”

Called SymGen, the tool is needed because the current “validation processes” can be “so burdensome and error-prone” that they could stop some people “from deploying generative AI models in the first place,” according to the researchers.

SymGen prompts an LLM-based chatbot to respond “with citations that point directly to the location in a source document, such as a given cell in a database,” MIT said. The system is said to reduce verification time by around 20%, potentially overcoming some of the more daunting drawbacks of using AI chatbots.
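
The announcement doesn’t detail how such citation-grounded output works under the hood, but the general idea can be sketched in a few lines of code: instead of copying values into its answer, the model emits symbolic references to cells in the source table, and a resolver swaps in the real values while preserving a link back to each cited cell. The snippet below is an illustrative Python sketch of that pattern, not MIT’s actual implementation; the data, reference syntax, and function names are all assumptions.

```python
import re

# Toy structured source: one row per player (illustrative data only).
source = {
    "players": [
        {"name": "Marcus Smart", "points": 22, "assists": 7},
    ]
}

# A model prompted in this style would answer with references such as
# "{players[0][points]}" instead of copying the value "22" directly.
llm_answer = (
    "{players[0][name]} scored {players[0][points]} points "
    "and recorded {players[0][assists]} assists."
)

def resolve(reference: str, data):
    """Follow a reference like players[0][points] into the source data."""
    value = data
    for key in re.findall(r"\w+", reference):
        value = value[int(key)] if key.isdigit() else value[key]
    return value

def render(answer: str, data) -> str:
    """Swap each symbolic reference for its value; a real interface would
    also keep the reference so a reviewer can jump to the cited cell."""
    return re.sub(r"\{([^{}]+)\}",
                  lambda m: str(resolve(m.group(1), data)),
                  answer)

print(render(llm_answer, source))
# Marcus Smart scored 22 points and recorded 7 assists.
```

Because every claim in the rendered answer traces back to a specific cell, a reviewer only has to spot-check the referenced cells rather than reread the entire source document – which is where the reported 20% time saving would come from.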

“SymGen can give people higher confidence in a model’s answer because they can easily take a closer look to make sure the information is verified,” said MIT’s Shannon Shen.

The team acknowledged that the tool is currently limited to checking tables and other structured data sources.

It’s also generally limited “by the quality of the source data” the model draws on – meaning that if a bot “cites an incorrect variable,” the person cross-checking “might be none the wiser.”

AI bots are mostly trained on reams of information gleaned from the internet – a fast-and-loose process fraught with alleged rights violations in areas such as copyright, disclosure and privacy.

The New York Times and News Corp have announced separate legal challenges over Perplexity AI’s alleged use of their articles. LinkedIn, which has been emailing users encouraging them to use AI to edit their profile pages, was recently caught quietly scraping account holders’ data for its AI before enabling an opt-out.