Before generative AI, if you wanted an inexpensive way to build out lots of content, you launched a wiki. You'd spin up a site, broad or niche, and throw the doors open for anyone to edit. Be an early mover (like Wikipedia) or cultivate a loyal community (think any popular sci-fi or fantasy show), and before long, you'd have a vast trove of strategically useful pages.

The catch with wikis is that when you hand the reins to the crowd, keeping quality consistent becomes a serious challenge. In Wikipedia's case, that's meant taking stewardship to heart, relying on a small army of editors, mostly volunteers, to manage millions of community-driven pages. Given how many people lean on Wikipedia daily, those editors wield a remarkable amount of influence. Recently, they reminded everyone just how much, vigorously pushing back on an internal experiment to add machine-generated summaries at the top of some articles, which was first reported by 404 Media. The backlash was swift enough that Wikipedia pulled the plug on the pilot just a day after it launched. It's a textbook example of how not to roll out AI to a devoted and discerning editorial team.

Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for Media CoPilot. To learn more visit mediacopilot.substack.com.

The revolt against the machines

What's fascinating about this particular stumble is that the AI didn't actually get anything egregiously wrong. If you take a cursory look at this AI summary about dopamine, it appears to break down what it is quite well, and certainly in a less-dense manner than the page's introduction.
There are no outright hallucinations, like a made-up scientific paper or a recommendation to add dopamine to your pizza. No, what triggered the reversal wasn't faulty output but an outright revolt from within. Wikipedia's editorial process runs on a kind of radical openness: Reasons for edits and objections are usually aired in public. A peek at the discussion page for the test reveals a massive backlash, a pile-on that would make even a Twitter outrage mob blush.

Sure, the scale of the reaction might feel over the top, but the instinct behind it is easy to grasp. For people whose livelihoods revolve around words, AI's creeping into their turf feels like an existential threat. That goes double for Wikipedia's editors, who are notorious for debating even single syllables (just look at this rhetorical battle over the usage of aluminum vs. aluminium).

That said, it's not as if the complaints of the Wikipedia editors were purely histrionic. Some of them pointed out the dopamine summary included phrasing that doesn't align with Wikipedia style, using pronouns like "we" when the site broadly adheres to a more arm's-length objective style. And a few words in the summary, like "emotion," appear to be inferred by the AI rather than based on a strict summary of the facts.

Those are all worth addressing, but remember: This was a test. Wikipedia appears to have been very deliberate about the AI technology used, choosing an open-source Cohere model to maintain a high level of customization and control. It would likely have been straightforward to take the feedback from editors, use it to iterate on the prompting and tuning, then produce better summaries in Wikipedia style. That obviously didn't happen. Wikipedia's editors reacted swiftly and harshly, and it's fair to say the conversation was not constructive.
Rather than trying to improve a product that readers were initially responding well to (for the short duration of the test, 75% of readers who clicked on the summary found it useful), the vast majority of editors seemed hell-bent on halting the project entirely. (A typical comment: "There is no amount of process or bureaucracy that can make this bad idea good.")

Lessons for the media

Versions of this same drama are playing out across media as executives hunt for AI strategies that boost the bottom line without crushing newsroom morale. In a move with strong echoes of the Wikipedia debacle, Politico's union recently took legal action against the company for introducing unvetted AI-generated summaries based on the newsroom's reporting. The whole industry is tense now that AI summary tools are starting to nibble away at search traffic, and layoffs, like the recent cuts at Business Insider, have journalist unions drawing battle lines to shield jobs from automation.

Yet AI can be an invaluable asset for reporters, too. Investigations at The Associated Press, The Wall Street Journal, and other outlets have been able to tackle massive datasets with AI's help. These tools can parse dense legal filings in record time, spark ideas as a brainstorming partner, or plow through endless pitches to spotlight the ones worth your attention.

For editors and product leads hoping to fold AI into their newsrooms, there are lessons to be gleaned from Wikipedia's misstep. The main one: Don't force an AI rollout from the top down. Sure, this was a test, but it was not that contained: the pages targeted for summaries weren't confined to any clear test area. The newsrooms getting this right (Reuters, The New York Times, The Washington Post) deploy AI thoughtfully and deliberately: team by team, sometimes even user by user, doing the hard work of winning people over before introducing new experiences.
Of course, user-facing content isn't the same as internal tools, but managers need to remember that journalists feel deeply invested in how their work appears. Rolling out a tool that changes what that looks like can't be as simple as: "This is what we're doing now." In journalism, how you introduce artificial intelligence is just as critical as what the AI does. Even the best system will spark resistance if it's sprung without trust, transparency, and genuine respect for the craft. AI can be a powerful ally for newsrooms, if it's brought in with care, buy-in, and a clear sense of partnership.
For those who've had enough of scrolling AI slop, meet Picastro: an Instagram-style app for astrophotography. Picastro is a dedicated, mobile-first platform built for amateurs and pros who capture images of the night sky. Launched late last year by Tom McCrorie, an amateur astrophotographer, the platform was designed to give celestial images the space and pixels they deserve, and to offer users a break from bots, algorithms, and shoddy AI-generated content.

The platform supports JPEG files up to 120 megabytes, allowing for high-resolution uploads and manual zooming, so every detail can be appreciated as nature intended. For reference, Instagram currently supports up to eight megabytes before photos are automatically compressed. Uploaded images can be tagged with a StarCard, a field where photographers share key information about their shot, from telescope type and camera model to filters used and ISO settings.

Instead of relying on an algorithm or recommendation engine to decide which images get seen, users vote on their favorite photos using a system of stars and trophies. The images with the most votes rise to the top, and each week and month the top-voted entries are highlighted as Image of the Week or Image of the Month.

There's also a social aspect. Users can connect through StarCamps, subgroups within the platform based on different skills, equipment brands, celestial targets, or experience levels. The app offers a free plan, Curiosity, but full access requires a subscription. Paid plans (Titan, Callisto, and Ganymede) range from about $5 to $10 per month and allow for more uploads and larger file sizes. If you ever need a reminder that social media is fake and we live on a floating rock, just download the app and have a scroll.
The Republican Party's 800-page One Big Beautiful Bill Act is now being debated in the Senate, with a final up-or-down vote expected as soon as Monday night. On the issue of artificial intelligence, much of the attention has focused on the bill's proposed moratorium on state-level laws regulating the development or application of AI models and apps. Notably, Senate negotiations reduced the proposed moratorium from 10 years to five, and added exceptions for state rules that protect kids and copyrights, so long as the rules do not unduly or disproportionately burden AI systems and models.

However, state preemption is only one of several major AI-related proposals in the bill. It appropriates billions of dollars for new AI initiatives across multiple federal agencies, including the departments of Defense, Homeland Security, Commerce, and Energy.

Homeland Security

The bill allocates $6.1 billion for infrastructure and systems used in border surveillance. A portion of the funding will go toward acquiring new and upgraded surveillance systems that use artificial intelligence, machine learning, or computer vision to detect, identify, classify, and track items of interest. It also directs DHS to develop new nonintrusive inspection equipment, potentially using AI, to detect illicit narcotics crossing the border.

Defense

For fiscal year 2025, the bill provides $450 million to develop AI and autonomous robotics systems for naval shipbuilding. It allocates $145 million for AI in aerial and naval attack drones and systems. An additional $250 million is proposed to expand AI projects within U.S. Cyber Command, and $115 million is set aside to develop AI systems that help protect nuclear facilities from cyberattacks. Another $200 million is included to improve the speed, efficiency, and cybersecurity of the systems the Pentagon uses to audit its financial statements.
Commerce

The bill amends existing law to include AI systems and automated decision systems as eligible projects under the Broadband Equity, Access, and Deployment (BEAD) Program. It also adds $500 million in funding to the program for fiscal year 2025. In addition, the bill allocates $25 million to the Commerce Department for constructing, acquiring, and deploying AI infrastructure required to run AI models and systems. The bill states that any state not complying with the five-year moratorium on AI regulation will be ineligible for these funds. Public interest and tech advocacy groups have strongly criticized the provision, arguing it effectively forces states to choose between essential broadband funding and their ability to oversee AI development responsibly.

"Congress should abandon this attempt to stifle the efforts of state and local officials who are grappling with the implications of this rapidly developing technology, and should stop abdicating its own responsibility to protect the American people from the real harms that these systems have been shown to cause," Center for Democracy and Technology CEO Alexandra Reeve Givens said in a statement Monday.

Energy

The bill provides $150 million to the Energy Department to develop and share data and AI models. It instructs the agency to work with national and commercial labs to curate Department of Energy data for use in new AI models. The government believes this energy usage data can support the private sector in developing next-generation microelectronics that consume less power. The Energy Department will also share its AI models with private-sector researchers to accelerate innovation in discovery science and engineering for new energy technologies.