2024-09-26 19:20:12 | Engadget

Researchers have spotted an apparent downside of smarter chatbots. Although AI models predictably become more accurate as they advance, they're also more likely to (wrongly) answer questions beyond their capabilities rather than saying, "I don't know." And the humans prompting them are more likely to take their confident hallucinations at face value, creating a trickle-down effect of confident misinformation. "They are answering almost everything these days," José Hernández-Orallo, a professor at the Universitat Politècnica de València, Spain, told Nature. "And that means more correct, but also more incorrect."

Hernández-Orallo, the project lead, worked on the study with his colleagues at the Valencian Research Institute for Artificial Intelligence in Spain. The team studied three LLM families: OpenAI's GPT series, Meta's LLaMA and the open-source BLOOM. They tested early versions of each model and moved up to larger, more advanced ones, though not today's most advanced. For example, the team began with OpenAI's relatively primitive GPT-3 ada model and tested iterations leading up to GPT-4, which arrived in March 2023. The four-month-old GPT-4o wasn't included in the study, nor was the newer o1-preview. I'd be curious whether the trend still holds with the latest models.

The researchers tested each model on thousands of questions about arithmetic, anagrams, geography and science. They also quizzed the models on their ability to transform information, such as alphabetizing a list, and ranked the prompts by perceived difficulty. The data showed that the chatbots' share of wrong answers (instead of avoiding questions altogether) rose as the models grew. So the AI is a bit like a professor who, as he masters more subjects, increasingly believes he has the golden answers on all of them. Further complicating things are the humans prompting the chatbots and reading their answers.
The researchers tasked volunteers with rating the accuracy of the AI bots' answers, and found that the volunteers incorrectly classified inaccurate answers as accurate surprisingly often: the share of wrong answers falsely perceived as right typically fell between 10 and 40 percent. "Humans are not able to supervise these models," Hernández-Orallo concluded.

The research team recommends that AI developers boost performance on easy questions and program chatbots to decline to answer complex ones. "We need humans to understand: 'I can use it in this area, and I shouldn't use it in that area,'" Hernández-Orallo told Nature.

It's a well-intended suggestion that could make sense in an ideal world. But fat chance AI companies oblige. Chatbots that more often say "I don't know" would likely be perceived as less advanced or valuable, leading to less use and less money for the companies making and selling them. So, instead, we get fine-print warnings that "ChatGPT can make mistakes" and "Gemini may display inaccurate info." That leaves it up to us to avoid believing and spreading hallucinated misinformation that could hurt ourselves or others. For accuracy, fact-check your damn chatbot's answers, for crying out loud.

You can read the team's full study in Nature. This article originally appeared on Engadget at https://www.engadget.com/ai/advanced-ai-chatbots-are-less-likely-to-admit-they-dont-have-all-the-answers-172012958.html?src=rss
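As an aside, the metric at the heart of the study is simple to picture: each response is labeled correct, incorrect, or avoided, and the worry is that as models grow, wrong answers displace avoidance. Here is a minimal sketch of that tally; the numbers are hypothetical, for illustration only, and are not the study's actual data.

```python
from collections import Counter

def tally(outcomes):
    """Summarize responses labeled 'correct', 'incorrect', or 'avoided'
    into fractions, mirroring the study's three-way taxonomy."""
    counts = Counter(outcomes)
    total = len(outcomes)
    return {label: counts[label] / total
            for label in ("correct", "incorrect", "avoided")}

# Hypothetical response labels -- illustration only, not study data.
early_model = ["correct"] * 40 + ["incorrect"] * 20 + ["avoided"] * 40
late_model = ["correct"] * 60 + ["incorrect"] * 35 + ["avoided"] * 5

print(tally(early_model))  # avoidance outweighs wrong answers
print(tally(late_model))   # more correct, but errors displace "I don't know"
```

The point of separating "incorrect" from "avoided" is exactly the study's: overall accuracy can rise even while the error rate among attempted answers worsens.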