2025-04-04 09:00:00 | Fast Company

The nonstop cavalcade of announcements in the AI world has created a kind of reality distortion field. There is so much buzz, and even more money, circulating in the industry that it feels almost sacrilegious to doubt that AI will make good on its promises to change the world. Deep research can do 1% of all knowledge work! Soon the internet will be designed for agents! Infinite Ghibli! And then you remember AI screws things up. All. The. Time.

Hallucinations, when a large language model essentially spits out information created out of whole cloth, have been an issue for generative AI since its inception. And they are doggedly persistent: Despite advances in model size and sophistication, serious errors still occur, even in so-called advanced reasoning or thinking models. Hallucinations appear to be inherent to generative technology, a by-product of AI's seemingly magical quality of creating new content out of thin air. They're both a feature and a bug at the same time.

In journalism, accuracy isn't optional, and that's exactly where AI stumbles. Just ask Bloomberg, which has already hit turbulence with its AI-generated summaries. The outlet began publishing AI-generated bullet points for some news stories back in January of this year, and it has already had to correct more than 30 of them, according to The New York Times.

The intern that just doesn't get it

AI is occasionally described as an incredibly productive intern, since it knows pretty much everything and has a superhuman ability to create content. But if you had to issue 30-plus corrections for an intern's work in three months, you'd probably tell that intern to start looking at a different career path.

Bloomberg is hardly the first publication to run headfirst into hallucinations. But the fact that the problem is still happening, more than two years after ChatGPT debuted, pinpoints a primary tension when AI is applied to media: To create novel audience experiences at scale, you need to let the generative technology create content on the fly. But because AI often gets things wrong, you also need to check its output with "humans in the loop." You can't do both.

The typical approach thus far is to slap a disclaimer onto the content. The Washington Post's Ask the Post AI is a good example, warning users that the feature is an "experiment" and encouraging them to "Please verify by consulting the provided articles." Many other publications have similar disclaimers. It's a strange world where a media company introduces a new feature with a label that effectively says, "You can't rely on this." Providing accurate information isn't a secondary feature of journalism; it's the whole point. This contradiction is one of the strangest manifestations of AI's application in media.

Moving to a "close enough" world

How did this happen? Arguably, media companies were forced into it. When ChatGPT and other large language models first began summarizing content, we were so blown away by their mastery of language that we weren't as concerned about the fine print: "ChatGPT can make mistakes. Check important info." And it turns out that for most users that was good enough. Even though generative AI often gets facts wrong, chatbots have seen explosive user growth. "Close enough" appears to be what the world is settling on.

It's not a standard anyone sought out, but the media is slowly adopting it as more publications launch generative experiences with similar disclaimers. There's an "If you can't beat 'em, join 'em" aspect to this, certainly: As more people turn to AI search engines and chatbots for information, media companies feel pressure to either sign licensing deals to have their content included, or match those AI experiences with their own chatbots. Accuracy? There's a disclaimer for that.

One notable holdout, however, is the BBC. So far, the BBC hasn't signed any deals with AI companies, and it has been a leader in pointing out the inaccuracies that AI portals create, publishing its own research on the topic earlier this year. It was also the BBC that ultimately convinced Apple to dial back its shoddy notification summaries on the iPhone, which were garbling news to the point of making up entirely false narratives. In a world where it's looking increasingly fashionable for media companies to take licensing money, the BBC is architecting a more proactive approach.

Somewhere along the way, whether out of financial self-interest or by falling into Big Tech's reality distortion field, many media companies began to buy into the idea that hallucinations were either not that big a problem or something that will inevitably be solved. After all, "Today is the worst this technology will ever be." Think of pollution and coal plants: an ugly side effect, but one that doesn't stop the business from thriving. That's how hallucinations function in AI: clearly flawed, occasionally harmful, yet tolerated, because the growth and money keep coming.

But those false outputs are deadly to an industry whose primary product is accurate information. Journalists should not sit back and expect Silicon Valley to simply solve hallucinations on its own, and the BBC is showing there's a path to being part of the solution without evangelizing or ignoring the problem. After all, "Check important info" is supposed to be the media's job.

