AI is no longer the future of healthcare; it's already reshaping how patients are diagnosed and treated. Some of the most interesting developments involve systems that sense and respond to human emotion. Cedars-Sinai's Connect platform, for example, adapts care based on patient sentiment; CompanionMx interprets vocal and facial cues to detect anxiety; and Feel Therapeutics uses emotion-sensing wearables to tailor interventions in real time. At the same time, clinical tools are evolving. Hospitals are pairing large language models (LLMs) with AI note-taking apps such as Nabla and Heidi, which can listen, summarize, and respond to the nuances of doctor-patient conversations. Investment in medical scribing technologies alone hit around $800 million last year.

A SHIFT TO AI ADAPTATION

All of this points to a bigger shift from AI that automates tasks to AI that adapts. Traditional AI sped up paperwork and crunched data. Adaptive AI helps clinicians make better judgments, understand patients more deeply, and respond in context. You can already see this shift in breast cancer screening, genomics, and drug discovery, where high-quality data and constant validation are driving real progress. Emotionally aware tools, when designed responsibly, can strengthen the connection between clinicians and patients, personalize care, and ease pressure on overstretched systems.

But as adaptive AI becomes more widely available, success depends less on technical brilliance and more on how systems are built. The tools that succeed will be able to flex around people, fitting patients' needs, clinicians' workflows, and the realities of care. Good AI needs to be anticipatory and sensitive to context, built for the full diversity of patients.

Even the most empathetic AI cannot, of course, erase the imperfections of human systems. Recent studies, for example, show that medical AI tools and LLM-based assistants routinely downplay symptoms in women and treat Black and Asian patients with less empathy than white men. AI does not cleanse the biases of the real world; it carries them forward and often widens their impact. We have seen this pattern before.

DEPLOYMENT MATTERS

That's why deployment conditions matter as much as technology. A system that mimics empathy does not automatically grasp nuance, context, or risk. Without firm ethical boundaries, so-called emotional intelligence can give a false sense of security. Clinicians still need to make the final calls, protecting patients and maintaining trust. AI can be a helpful care partner, but it cannot take on the weight of human responsibility.

Building trust requires strengthening the foundations on which AI is used. Involving patients, families, and carers from the start surfaces blind spots early and helps balance compassion with practicality. It also clarifies where automation should step back and human care needs to step in. Our Cancer Platform, developed with the Cancer Awareness Trust, illustrates this in practice, showing how empathetic design creates dependable, genuinely helpful tools.

AI isn't here to replace people. It's here to support them in their expertise and scale their impact. Ideally, we will build machines to handle complexity and pattern recognition, freeing clinicians to focus on what humans do best: exercise judgement, build connection, and provide care. Machines might learn to care, but it is up to us to create the ecosystem where that care is trustworthy, fair, and meaningful: a challenge, yes, but one full of opportunity.

Nicki Sprinz is CEO of ustwo.
Beijing imposed sanctions on Friday against 20 U.S. defense-related companies and 10 executives, a week after Washington announced large-scale arms sales to Taiwan.

The sanctions entail freezing the companies' assets in China and banning individuals and organizations from dealing with them, according to the Chinese foreign ministry.

The companies include Northrop Grumman Systems Corporation, L3Harris Maritime Services, and Boeing in St. Louis, while defense firm Anduril Industries founder Palmer Luckey is among the executives sanctioned, who can no longer do business in China and are barred from entering the country. Their assets in the East Asian country have also been frozen.

The announcement of the U.S. arms-sale package, valued at more than $10 billion, has drawn an angry response from China, which claims Taiwan as its own and says it must come under its control. If approved by the U.S. Congress, it would be the largest-ever U.S. weapons package to the self-ruled territory.

“We stress once again that the Taiwan question is at the very core of China’s core interests and the first red line that must not be crossed in China-U.S. relations,” the Chinese foreign ministry said in a statement on Friday. “Any company or individual who engages in arms sales to Taiwan will pay the price for the wrongdoing.”

The ministry also urged the U.S. to stop what it called “the dangerous moves of arming Taiwan.”

Taiwan is a major flashpoint in U.S.-China relations that analysts worry could explode into military conflict between the two powers. China says that U.S. arms sales to Taiwan would violate diplomatic agreements between China and the U.S. China’s military has increased its presence in Taiwan’s skies and waters in the past few years, holding joint drills with its warships and fighter jets on a near-daily basis near the island.

Under U.S. federal law, the United States is obligated to assist Taiwan with its self-defense, a point that has become increasingly contentious with China. Beijing already has strained ties with Washington over trade, technology, and human rights issues.

Associated Press
For the past three years, AI's breakout moment has happened almost entirely through text. We type a prompt, get a response, and move to the next task. While this intuitive interaction style turned chatbots into a household tool overnight, it barely scratches the surface of what the most advanced technology of our time can actually do. This disconnect has created a significant gap in how consumers use AI. While the underlying models are rapidly becoming multimodal, capable of processing voice, visuals, and video in real time, most consumers are still using them as a search engine.

Looking toward 2026, I believe the next wave of adoption won't be about utility alone, but about evolving beyond static text into dynamic, immersive interactions. This is AI 2.0: not just retrieving information faster, but experiencing intelligence through sound, visuals, motion, and real-time context.

AI adoption has reached a tipping point. In 2025, ChatGPT's weekly user base doubled from roughly 400 million in February to 800 million by year's end. Competitors like Gemini and Anthropic saw similar growth, yet most users still engage with LLMs primarily via text chatbots. In fact, Deloitte's Connected Consumer Survey shows that despite over half (53%) of consumers experimenting with generative AI, most people still relegate AI to administrative tasks like writing, summarizing, and researching.

Yet when you look at the digital behavior of consumers outside of AI, it's clear that consumers crave immersive experiences. According to Activate Consulting's Tech & Media Outlook 2026, 43% of Gen Z prefer user-generated platforms like TikTok and YouTube over traditional TV or paid streaming, and they spend 54% more time on social video platforms than the average consumer, abandoning traditional media for interactive social platforms. This creates a fundamental mismatch: Consumers live in a multi-sensory world, but their AI tools are stuck delivering plain text.

While the industry recognizes this gap and is investing to close it, I predict we'll see a fundamental shift in how people use and create with AI. In AI 2.0, users will no longer simply consume AI-generated content but will instead leverage multimodal AI to bring voice, visuals, and text together, allowing them to shape and direct their experiences in real time.

MULTIMODAL AI UNLOCKS IMMERSIVE STORYTELLING

If AI 1.0 was about efficiency, AI 2.0 is about engagement. While text-based AI is limited in how deeply it can engage audiences, multimodal AI allows the user to become an active participant. Instead of reading a story, you can interact with a main character and take the plot in a new direction, or build your own world where narratives and characters evolve with you.

We can look to the $250 billion gaming industry as the blueprint for the potential of multimodal AI. Video games combine visuals, audio, narrative, and real-time agency, creating an immersive experience that traditional entertainment can't replicate. Platforms like Roblox and Minecraft let players inhabit content. Roblox alone reaches over 100 million daily users, who collectively spend tens of billions of hours a year immersed in these worlds; engagement that text alone could never generate.

With the rise of multimodal AI, users everywhere will be able to create the kinds of experiences they've loved participating in through gaming. By removing technical barriers, multimodal AI lets everyone build experiences that feel authentic to the real world and take an active part in them.

Legacy media is also responding to this trend. Disney recently announced a $1 billion investment in OpenAI and a licensing deal that will let users create short clips with characters from Marvel, Pixar, and Star Wars through the Sora platform.

WHY MULTIMODAL AI CAN BE SAFER FOR YOUNGER USERS

As AI becomes part of everyday life, safety, particularly for younger users, has become one of the most critical issues facing the industry. Moving from open-ended chat to structured, multimodal worlds allows us to design guardrails within the gameplay. Instead of relying on continuous unstructured prompts, these environments are built around characters, visuals, voices, and defined story worlds. Interaction is guided by the experience itself. That structure changes how and where safety is designed into the system.

Educational AI demonstrates this approach. Platforms like Khan Academy Kids and Duolingo combine visuals, audio, and structured prompts to guide learning. The AI isn't trying to be everything; it focuses on doing one task well. As multimodal AI evolves, one of its most meaningful opportunities may be this ability to balance creative freedom with thoughtful constraint. AI 2.0 presents a design shift that could give builders, educators, and families new ways to shape safer, more intentional digital spaces for the next generation.

WHY MULTIMODAL AI IS THE NEXT FRONTIER

In 2026, I predict that consumers won't just be prompting AI; they will be engaging with it through more immersive, interactive experiences. This excites me because users won't just passively receive outputs; they'll actively shape experiences and influence how AI evolves in real time. We could see users remixing the series finale of their favorite TV show, or students learning history not by reading a textbook, but by actively debating a historically accurate AI simulation.

For founders and creators, the next step is to stop building tools only for efficiency and start building environments for immersion and exploration. The winners of the next cycle won't be the ones with the smartest models, but the ones who make AI feel less like a utility and more like a destination for rich, interactive experiences.

Karandeep Anand is CEO of Character.AI