CES is a show that's all about the future. Usually, that future is within the next year or two: companies show off products to kick off marketing campaigns and begin building consumer demand. Sometimes, though, they offer a peek a good bit further down the road. Several prototypes at this year's CES offered clues about how companies expect the consumer electronics world to evolve. Many, of course, will fall by the wayside, and almost all will change before getting anywhere close to market. Even so, they offer a look into a consumer electronics crystal ball. Here are some trends they're prophesying for the years to come.

Smart watches will get a lot more useful and easier to repair

Smart watches already do a lot. They free up users' hands, letting them check messages, see who is calling without fumbling for a phone, track health data, and act as a lifeline if you're stranded. They're good for opening hotel room doors, but they're generally not seen as secure enough for something like a banking or access system. Cambridge Consultants, however, showcased a prototype luxury watch that also doubles as a digital passkey. The rotary bezel (the rotating ring with markings most often seen on dive watches) uses extreme miniaturization to house additional security components. At that same demo: a prototype smart watch designed to let consumers repair the device themselves without sacrificing the aesthetics.

Augmented reality will ditch the cameras

Eye tracking, at present, requires a camera. But another prototype shown by Cambridge Consultants did away with the lens, using photonics and sensor fusion instead. That could be the push AR needs to gain wider acceptance, as it could make headsets significantly smaller and more comfortable.

TVs are about to be a lot brighter

This upcoming trend is a lot closer than some of the others. Both Samsung and TCL showcased TV sets that blast out the colors using a next-generation backlighting technology called RGB LED, the latest in the alphabet soup of backlighting names (which also includes QLED, OLED, LED, Mini LED, and more). The colors pop like never before, but the screens are also significantly brighter, to the extent that if you're too close, you might find yourself squinting. The Samsung prototype reached a brightness of 4,500 nits, about twice the level of current high-end TVs.

Position sensing could be the next battleground

As the robotics industry continues to grow and nudge its way into homes and businesses, it's going to be crucial for positioning software to be as precise as possible. (It's fun to watch a robot dance, but a lot less fun when it hits you full force while showcasing its moves.) This year's CES showed off a number of new position-sensing technologies, from Lego's smart bricks, which incorporate position sensing into play, to a prototype architecture that shrinks the footprint of unidirectional position sensing. That could open the door to adding position sensing to devices where it currently can't be used, while also ensuring your housebot doesn't accidentally pop you with a right hook as it takes care of your laundry.
A new insult for artificial intelligence just dropped, thanks to Microsoft's CEO. If you use Microsoft products, it's near impossible to avoid AI now. The company is pushing AI agents deep into Windows, and every app, service, and product Microsoft has on the market now includes some kind of AI integration, with no option to opt out.

Microsoft CEO Satya Nadella recently shared a blog post to LinkedIn titled "Looking Ahead to 2026," offering an insight into the company's focus for the new year. Spoiler alert: it's AI. Nadella wrote that he wants users to stop thinking of AI as "slop" and start thinking of it as "bicycles for the mind." Many took the post as pushback against the popular insult "slop," often leveled at anything AI-generated and recently crowned Merriam-Webster's word of the year for 2025.

The internet saw Nadella's critique and raised him a new insult for anything AI, now dubbed "Microslop." "I will hereby be referring to Microsoft as MicroSlop for the rest of 2026," one X user posted in response to Nadella's words. The post currently has almost 200,000 views. The term subsequently trended across Instagram, Reddit, X, and beyond. On X, @MrEwanMorrison wrote, "A great example of the Streisand Effect in which telling people not to call AI slop is already backfiring and resulting in millions of people hearing the word for the first time and spreading it virally. A huge own goal from Microslop." "Year of the Linux desktop," another X user posted, "but not because of Linux."

In a separate clip uploaded over the weekend, programmer Ryan Fleury demonstrates Microslop in action. At the start of the video, the AI-powered search bar on the Windows 11 settings page recommends searching "My mouse pointer is too small." Yet when Fleury searches "My mouse pointer is too small," word for word, nothing turns up. He waits around for a moment or two, but nothing loads. When he looks up "test" afterwards, three results pop up. "This is not a real company," Fleury wrote.

He then added, "AI writes 90% of our code!!!!", referring to claims made by Nadella that as much as 30% of the company's code is now written by artificial intelligence. "Don't worry, we can tell."
Over the course of 2025, deepfakes improved dramatically. AI-generated faces, voices, and full-body performances that mimic real people increased in quality far beyond what even many experts expected just a few years ago. They were also increasingly used to deceive people. For many everyday scenarios, especially low-resolution video calls and media shared on social media platforms, their realism is now high enough to reliably fool nonexpert viewers. In practical terms, synthetic media have become indistinguishable from authentic recordings for ordinary people and, in some cases, even for institutions.

And this surge is not limited to quality. The volume of deepfakes has grown explosively: Cybersecurity firm DeepStrike estimates an increase from roughly 500,000 online deepfakes in 2023 to about 8 million in 2025, with annual growth nearing 900%.

I'm a computer scientist who researches deepfakes and other synthetic media. From my vantage point, I see that the situation is likely to get worse in 2026 as deepfakes become synthetic performers capable of reacting to people in real time.

Just about anyone can now make a deepfake video.

Dramatic improvements

Several technical shifts underlie this dramatic escalation. First, video realism made a significant leap, thanks to video generation models designed specifically to maintain temporal consistency. These models produce videos with coherent motion, consistent identities of the people portrayed, and content that makes sense from one frame to the next. The models disentangle the information representing a person's identity from the information about motion, so that the same motion can be mapped to different identities, or the same identity can perform multiple types of motion. These models produce stable, coherent faces without the flicker, warping, or structural distortions around the eyes and jawline that once served as reliable forensic evidence of deepfakes.

Second, voice cloning has crossed what I would call the indistinguishable threshold. A few seconds of audio now suffice to generate a convincing clone, complete with natural intonation, rhythm, emphasis, emotion, pauses, and breathing noise. This capability is already fueling large-scale fraud: some major retailers report receiving over 1,000 AI-generated scam calls per day. The perceptual tells that once gave away synthetic voices have largely disappeared.

Third, consumer tools have pushed the technical barrier almost to zero. Upgrades to OpenAI's Sora 2 and Google's Veo 3, plus a wave of startups, mean that anyone can describe an idea, let a large language model such as OpenAI's ChatGPT or Google's Gemini draft a script, and generate polished audiovisual media in minutes. AI agents can automate the entire process. The capacity to generate coherent, storyline-driven deepfakes at large scale has effectively been democratized.

This combination of surging quantity and personas that are nearly indistinguishable from real humans creates serious challenges for detecting deepfakes, especially in a media environment where people's attention is fragmented and content moves faster than it can be verified. There has already been real-world harm, from misinformation to targeted harassment and financial scams, enabled by deepfakes that spread before people have a chance to realize what's happening.

AI researcher Hany Farid explains how deepfakes work and how good they're getting.

The future is real time

Looking forward, the trajectory for next year is clear: Deepfakes are moving toward real-time synthesis that can produce videos closely resembling the nuances of a human's appearance, making it easier for them to evade detection systems. The frontier is shifting from static visual realism to temporal and behavioral coherence: models that generate live or near-live content rather than prerendered clips.

Identity modeling is converging into unified systems that capture not just how a person looks, but how they move, sound, and speak across contexts. The result goes beyond "this resembles person X" to "this behaves like person X over time." I expect entire video-call participants to be synthesized in real time; interactive AI-driven actors whose faces, voices, and mannerisms adapt instantly to a prompt; and scammers deploying responsive avatars rather than fixed videos.

As these capabilities mature, the perceptual gap between synthetic and authentic human media will continue to narrow. The meaningful line of defense will shift away from human judgment and toward infrastructure-level protections. These include secure provenance, such as media signed cryptographically, and AI content tools that use the Coalition for Content Provenance and Authenticity specifications. It will also depend on multimodal forensic tools such as my lab's Deepfake-o-Meter. Simply looking harder at pixels will no longer be adequate.

Siwei Lyu is a professor of computer science and engineering and director of the UB Media Forensic Lab at the University at Buffalo. This article is republished from The Conversation under a Creative Commons license. Read the original article.
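The cryptographic-provenance idea mentioned above (media signed at the source so tampering is detectable) can be illustrated with a minimal sketch. To be clear, this is not the actual C2PA protocol, which embeds COSE signatures and X.509 certificate chains in a manifest; it is a simplified integrity check that uses an HMAC with a hypothetical publisher key as a stand-in for a real signature, just to show how a verifier can reject any byte-level modification of signed media.

```python
import hashlib
import hmac

# Simplified provenance sketch: the publisher tags the exact media bytes
# with a key, and a verifier recomputes the tag before trusting the file.
# SECRET_KEY is a hypothetical shared key for illustration only; real
# provenance systems use asymmetric signatures, not a shared HMAC key.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a hex tag that binds the key to these exact media bytes."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Return True only if the media is byte-identical to what was signed."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels when comparing tags.
    return hmac.compare_digest(expected, tag)

original = b"frame-data-of-authentic-video"
tag = sign_media(original)

print(verify_media(original, tag))                          # True
print(verify_media(b"frame-data-of-deepfaked-video", tag))  # False
```

The point of the design is that verification depends only on the bytes and the tag, not on how realistic the content looks, which is why signed provenance survives even when pixel-level forensics fails.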