2024-10-08 21:45:32 | Engadget

Artificial intelligence is expected to have an impact on the upcoming US election in November. States have been trying to protect against misinformation by passing laws that require political advertisements to disclose when they have used generative AI. Twenty states now have rules on the books, and according to new research, voters have a negative reaction to seeing those disclaimers. That seems like a pretty fair response: if a politician uses generative AI to mislead voters, then voters don't appreciate that.

The study was conducted by New York University's Center on Technology Policy and first reported by The Washington Post. The investigation had a thousand participants watch political ads from fictional candidates. Some of the ads were accompanied by a disclaimer that AI was used in the creation of the spot, while others had no disclaimer. The presence of a disclaimer was linked to viewers rating the promoted candidate as less trustworthy and less appealing. Respondents also said they would be more likely to flag or report the ads on social media when they contained disclaimers. In attack ads, participants were more likely to express negative opinions about the candidate who sponsored the spot rather than the candidate being attacked. The researchers also found that the presence of an AI disclaimer led to worse or unchanged opinions regardless of the fictional candidate's political party.

The researchers tested two different disclaimers inspired by two different state requirements for AI disclosure in political ads. The text tied to Michigan's law reads: "This video has been manipulated by technical means and depicts speech or conduct that did not occur." The other disclaimer is based on Florida's law and says: "This video was created in whole or in part with the use of generative artificial intelligence." Although the approach of Michigan's requirements is more common among state laws, study participants said they preferred seeing the broader disclaimer for any type of AI use.

While these disclaimers can play a part in transparency about the presence of AI in an ad, they aren't a perfect failsafe. As many as 37 percent of the respondents said they didn't recall seeing any language about AI after viewing the ads.

This article originally appeared on Engadget at https://www.engadget.com/ai/viewers-dont-trust-candidates-who-use-generative-ai-in-political-ads-study-finds-194532117.html?src=rss


Category: Marketing and Advertising

 
