Brands love to insert themselves into cultural conversations or piggyback on buzzy current events, a strategy sometimes called newsjacking. But it can happen without seeking, or even wanting, the attention. The borderline absurd virality of a Nike tracksuit evidently worn by Venezuelan President Nicolás Maduro as he was taken into American custody is the most high-profile recent example, but it definitely won't be the last.

This form of what we could call involuntary product placement can be a conundrum for brands, which prefer to be associated with upbeat or positive events, not dictators or controversial geopolitics. And that's been made even more challenging by a starkly divided political climate that has put brands from Bud Light to Tesla to Hilton in the crossfire, and a hypercharged social media environment that constantly hungers for new angles, riffs, and takes on whatever is hogging the spotlight.

Of course, involuntary product placement isn't new: If you remember the car chase climaxing in O.J. Simpson's arrest, you know he was driving a Ford Bronco. Yet unsolicited pop-culture brand cameos aren't always bad. Ocean Spray, for instance, enjoyed a boost after it accidentally had a starring role in a feel-good viral clip of a skateboarder sipping the drink as Fleetwood Mac's "Dreams" played. And in a marketing-soaked world, plenty of accidental brand appearances scarcely register.

But that same ubiquity is part of what makes brands such handy and ultimately irresistible signifiers for people to latch on to and exploit, especially now, when they pop up in full-on news spectacles amplified by social media. Spawning instant and endless memes (and, increasingly, AI fakery), these events soak up and repurpose all the relevant cultural material they can, brands very much included. When a healthcare executive was gunned down in Manhattan in 2024, for example, coverage of the subsequent manhunt included plenty of online scrutiny of his jacket, backpack, and other gear. Since Luigi Mangione was arrested on murder charges for the crime, brand sleuths have continued to obsess over his courtroom style choices, snapping up items like a merino sweater from Nordstrom that he wore to his arraignment.

Luigi Mangione arrives at Manhattan Criminal Court on December 23, 2024, wearing a sweater from Nordstrom. [Photo: Spencer Platt/Getty Images]

The Maduro tracksuit has brought all this to a new level, attracting attention for how much attention it was attracting. Searches for "Nike Tech" spiked, and styles and colorways similar to the jacket and pants Maduro wore were selling out; some reviews on the brand's site seemed to wink at the whole scenario. ("Viva Venezuela!!") "There was something disconcerting about the presence of a globally recognizable brand in a moment typically governed by the visual codes of state power," design writer and educator Debbie Millman observed. "Athleisure replaced uniform; a logo supplanted insignia." The specific tracksuit has its own cultural significance, a New York Times style assessment on the matter reported, and has lately served as a uniform of sorts for some rappers and athletes (and their fans). Less seriously, of course, the juxtaposition of a detained head of state and Nike gear was fodder for a slew of ironic meme humor: a "steal his look" parody; the mock slogan "For the gym. For errands. For federal custody"; and so on.
A brand caught up in an involuntary product placement moment certainly doesn't want to be seen as celebrating the attention. But really, any kind of acknowledgment can be fraught. When the healthcare executive's killer was still at large, the CEO of Peak Design recognized the shooter's backpack as one made by his company, reached out to law enforcement, and ended up being threatened by customers who evidently wanted the fugitive to escape. As for Nike and its tracksuit's unplanned week in the spotlight, the company swiftly replied to an inquiry from Fast Company, declining any comment. Sometimes when a brand finds its products placed in the middle of the cultural conversation, the best move is to do nothing and wait quietly until the news moves on.
A new insult for artificial intelligence just dropped, thanks to Microsoft's CEO. If you use Microsoft products, it's near impossible to avoid AI now. The company is pushing AI agents deep into Windows, with every app, service, and product Microsoft has on the market now including some kind of AI integration, without the option to opt out.

Microsoft CEO Satya Nadella recently shared a blog post to LinkedIn titled "Looking Ahead to 2026," offering an insight into the company's focus for the new year. Spoiler alert: it's AI. Nadella wrote that he wants users to stop thinking of AI as "slop" and start thinking of it as "bicycles for the mind." Many took the post as a pushback against the popular insult "slop," often leveled at anything AI-generated and recently crowned Merriam-Webster's word of the year for 2025.

The internet saw Nadella's critique and raised him a new insult for anything AI, now dubbed "Microslop." "I will hereby be referring to Microsoft as MicroSlop for the rest of 2026," one X user posted in response to Nadella's words. The post currently has almost 200,000 views. The term subsequently trended across Instagram, Reddit, X, and beyond. On X, @MrEwanMorrison wrote, "A great example of the Streisand Effect in which telling people not to call AI slop is already backfiring and resulting in millions of people hearing the word for the first time and spreading it virally. A huge own goal from Microslop." "Year of the Linux desktop," another X user posted, "but not because of Linux."

In a separate clip uploaded over the weekend, programmer Ryan Fleury demonstrates Microslop in action. At the start of the video, the AI-powered search bar on the Windows 11 settings page recommends searching "My mouse pointer is too small." Yet when Fleury searches "My mouse pointer is too small," word for word, nothing turns up. He waits around for a moment or two, but nothing loads. When he looks up "test" afterwards, three results pop up. "This is not a real company," Fleury wrote. He then added, "AI writes 90% of our code!!!!," referring to claims made by Nadella that as much as 30% of the company's code is now written by artificial intelligence. "Don't worry, we can tell."
Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices, and full-body performances that mimic real people increased in quality far beyond what even many experts expected just a few years ago. They were also increasingly used to deceive people. For many everyday scenarios, especially low-resolution video calls and media shared on social media platforms, their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions.

And this surge is not limited to quality. The volume of deepfakes has grown explosively: Cybersecurity firm DeepStrike estimates an increase from roughly 500,000 online deepfakes in 2023 to about 8 million in 2025, with annual growth nearing 900%.

I'm a computer scientist who researches deepfakes and other synthetic media. From my vantage point, I see that the situation is likely to get worse in 2026 as deepfakes become synthetic performers capable of reacting to people in real time.

Just about anyone can now make a deepfake video.

Dramatic improvements

Several technical shifts underlie this dramatic escalation. First, video realism made a significant leap, thanks to video generation models designed specifically to maintain temporal consistency. These models produce videos that have coherent motion, consistent identities of the people portrayed, and content that makes sense from one frame to the next. The models disentangle the information representing a person's identity from the information about motion, so that the same motion can be mapped to different identities, or the same identity can have multiple types of motion (a toy sketch of this idea follows at the end of this section). These models produce stable, coherent faces without the flicker, warping, or structural distortions around the eyes and jawline that once served as reliable forensic evidence of deepfakes.

Second, voice cloning has crossed what I would call the indistinguishable threshold. A few seconds of audio now suffice to generate a convincing clone, complete with natural intonation, rhythm, emphasis, emotion, pauses, and breathing noise. This capability is already fueling large-scale fraud. Some major retailers report receiving over 1,000 AI-generated scam calls per day. The perceptual tells that once gave away synthetic voices have largely disappeared.

Third, consumer tools have pushed the technical barrier almost to zero. Upgrades to OpenAI's Sora 2 and Google's Veo 3, plus a wave of startups, mean that anyone can describe an idea, let a large language model such as OpenAI's ChatGPT or Google's Gemini draft a script, and generate polished audiovisual media in minutes. AI agents can automate the entire process. The capacity to generate coherent, storyline-driven deepfakes at a large scale has effectively been democratized.

This combination of surging quantity and personas that are nearly indistinguishable from real humans creates serious challenges for detecting deepfakes, especially in a media environment where people's attention is fragmented and content moves faster than it can be verified. There has already been real-world harm, from misinformation to targeted harassment and financial scams, enabled by deepfakes that spread before people have a chance to realize what's happening.

AI researcher Hany Farid explains how deepfakes work and how good they're getting.
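To make the identity/motion disentanglement idea above concrete, here is a toy, non-authoritative sketch in Python. Real systems learn identity and motion codes with trained encoders and render frames with a trained decoder; the random vectors and the render_frame function below are hypothetical stand-ins invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

# Toy latent codes. In a real model these come from learned encoders;
# here they are random vectors standing in for "who" and "how they move".
identity_a = rng.normal(size=16)        # person A's appearance code
identity_b = rng.normal(size=16)        # person B's appearance code
motion_seq = rng.normal(size=(24, 8))   # 24 frames of motion codes

def render_frame(identity, motion):
    # Stand-in for a decoder: combines one fixed identity code with a
    # per-frame motion code to produce one "frame" (here, just a vector).
    return np.concatenate([identity, motion])

# The same motion sequence can be "performed" by either identity:
video_a = np.stack([render_frame(identity_a, m) for m in motion_seq])
video_b = np.stack([render_frame(identity_b, m) for m in motion_seq])

# Identity stays constant across frames (no flicker), motion varies:
assert np.allclose(video_a[:, :16], identity_a)
assert not np.allclose(video_a[0, 16:], video_a[1, 16:])
# Both "videos" share the exact same motion, with different identities:
assert np.allclose(video_a[:, 16:], video_b[:, 16:])

The design point the sketch illustrates is the factorization itself: because identity and motion live in separate codes, swapping one while holding the other fixed is trivial, which is exactly what makes these models both temporally stable and easy to repurpose for impersonation.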
The future is real time

Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos closely resembling the nuances of a human's appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than pre-rendered clips. Identity modeling is converging into unified systems that capture not just how a person looks, but how they move, sound, and speak across contexts. The result goes beyond "this resembles person X" to "this behaves like person X over time." I expect entire video-call participants to be synthesized in real time; interactive AI-driven actors whose faces, voices, and mannerisms adapt instantly to a prompt; and scammers deploying responsive avatars rather than fixed videos.

As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment. Instead, it will depend on infrastructure-level protections. These include secure provenance, such as media signed cryptographically, and AI content tools that use the Coalition for Content Provenance and Authenticity specifications (a toy sketch of the signing idea appears after this article). It will also depend on multimodal forensic tools such as my lab's Deepfake-o-Meter. Simply looking harder at pixels will no longer be adequate.

Siwei Lyu is a professor of computer science and engineering and director of the UB Media Forensic Lab at the University at Buffalo. This article is republished from The Conversation under a Creative Commons license. Read the original article.
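To illustrate the cryptographically signed provenance idea mentioned above, here is a minimal sketch in Python. It is not the C2PA specification, which covers manifests, certificate chains, and embedding; it only demonstrates the underlying primitive (a publisher signs the media bytes, and any verifier with the public key can detect tampering). It assumes the third-party cryptography package is installed, and the media payload is a hypothetical placeholder.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a keypair and sign the raw media bytes.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

media_bytes = b"...raw video file contents..."  # placeholder payload
signature = private_key.sign(media_bytes)

# Verifier side: the signature travels with the media; checking it
# proves the bytes are unchanged since the publisher signed them.
try:
    public_key.verify(signature, media_bytes)
    print("Provenance check passed: media is unchanged since signing.")
except InvalidSignature:
    print("Provenance check failed.")

# Any post-signing edit, such as a deepfake face swap, breaks verification.
try:
    public_key.verify(signature, media_bytes + b"tampered")
except InvalidSignature:
    print("Tampered copy correctly rejected.")

Note what this does and does not buy: a valid signature proves the recording is the one the publisher released, not that its contents are true; and an unsigned clip proves nothing by itself, which is why provenance works best as infrastructure that capture devices and platforms apply by default.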