
2024-10-08 21:45:32 | Engadget

Artificial intelligence is expected to have an impact on the upcoming US election in November. States have been trying to protect against misinformation by passing laws that require political advertisements to disclose when they use generative AI. Twenty states now have such rules on the books, and according to new research, voters react negatively when they see those disclaimers. That seems like a fair response: if a politician uses generative AI in a way that could mislead voters, voters don't appreciate it.

The study was conducted by New York University's Center on Technology Policy and first reported by The Washington Post. It had a thousand participants watch political ads from fictional candidates. Some of the ads carried a disclaimer noting that AI was used in the creation of the spot, while others had none. The presence of a disclaimer was linked to viewers rating the promoted candidate as less trustworthy and less appealing, and respondents said they would be more likely to flag or report the ads on social media when they contained disclaimers. In attack ads, participants were more likely to express negative opinions about the candidate who sponsored the spot than about the candidate being attacked. The researchers also found that an AI disclaimer led to worse or unchanged opinions regardless of the fictional candidate's political party.

The researchers tested two disclaimers inspired by two different state requirements for AI disclosure in political ads. The text tied to Michigan's law reads: "This video has been manipulated by technical means and depicts speech or conduct that did not occur." The other disclaimer is based on Florida's law and says: "This video was created in whole or in part with the use of generative artificial intelligence." Although Michigan's approach is more common among state laws, study participants said they preferred seeing the broader disclaimer covering any type of AI use.

Still, while these disclaimers can play a part in transparency about the presence of AI in an ad, they aren't a perfect failsafe: as many as 37 percent of respondents said they didn't recall seeing any language about AI after viewing the ads.

This article originally appeared on Engadget at https://www.engadget.com/ai/viewers-dont-trust-candidates-who-use-generative-ai-in-political-ads-study-finds-194532117.html?src=rss


Category: Marketing and Advertising

 

