2025-11-20 17:00:00 | Fast Company

Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. I'm Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy. This week, I'm focusing on gathering some informed opinions from people trying out Google's new Gemini 3 Pro AI model. I also look at another circular AI investment agreement. Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.

What smart people are saying about Google's Gemini 3

The so-called generative AI boom is only about three years old. It has been characterized by some breakthrough moments, chief among them the release of OpenAI's ChatGPT in late 2022. A relatively small number of AI labs have been competing to release frontier models that beat all others, and the top spots in the benchmark rankings seem to change names every six months or so. But the release of Google's new flagship Gemini 3 Pro model (in preview), along with its impressive benchmark test scores, seems like a moment we'll remember.

Now, people (many of them developers) who immediately began testing the new model are beginning to weigh in on how well Gemini 3 performs in real-world use. Here is a selection of their impressions.

Thumbs-up

"Gemini 3 . . . shows significant gains in reasoning, reliability in multi-step agent workflows, and an ability to debug tough development tasks with high-quality fixes. In early evaluations, it improved Warp's Terminal Bench state-of-the-art score by 20%." - Zach Lloyd, CEO of Warp

"I simply asked Gemini 3 Pro to diagnose and fix its own code. It reasoned through the problem in exactly the way I would expect from a thoughtful junior engineer." - game developer Josh English on Medium

"Best creative and professional writing I've seen. I don't code, so that's off my radar, but for me the vibes are excellent. Intelligence, nuance, flexibility, and originality are promising in that distinct way that excites and disturbs me. Haven't had this feeling since 11/30/22." - Brett Cooper on X

"Gemini beat my own previously unbeaten personal test. The test involves a fairly long list of accurate-to-year information, ordered properly, many opportunities for hallucination, and then used to achieve a goal. I need a new test, so yeah, I think Gemini's impressive." - Richard Knoche on X

"It's a great model, as far as LLMs go, topping most benchmarks, but it's certainly not AGI. It's haunted by the same kind of problems that all earlier models have had. Hallucinations and unreliability persist. Visual and physical reasoning are still a mess. In short, scaling isn't getting us to AGI." - AI skeptic Gary Marcus on Medium

"I found the answer, and it's actually terrifying (in a good way). The uptake of Gemini has been wild. We are talking about a model that is delivering rich visuals, deeper interactivity, and agentic vibe coding." - CodeToDeploy on Medium

"[We] created the first open-source evaluation framework to test how leading AI models respond to self-harm and mental health crisis scenarios, and the results were alarming," said Sean Dadashi, cofounder of the AI journaling app company Rosebud. "Gemini 3 [is] the safest AI model we've seen yet."

Thumbs-down

"Thoroughly benchmaxxed (optimized to do well on benchmark tests), very mid model. Makes so much errors, I have strong doubts they are serving the model they ran the benchmarks on." - Infrecursion on X

"My experience with the Gemini [command line interface] has been dreadful. It craps out at least half of the time. When it works it is ridiculously fast, so I keep trying it. But it has proven very inferior to the Claude Code experience in my usage." - Reddit user dinkleberg

"Was much worse than GPT-5.1 for 'find me research on [x]'-type queries. It kept trying to do my thinking (synthesis) for me, which is not what I want from it. It gave me individual research results if I explicitly asked, but even then it seemed to go way less wide than GPT-5.1." - Robert Mushkatblat on X

The free-for-all continues: Anthropic takes $15 billion from Microsoft, Nvidia, with strings

Someone on Twitter (X) said it best (Peter Wildeford): "After the breakup, both Microsoft and OpenAI are seeing different people . . . also Nvidia is sleeping with everybody." In other words, after doubling down on its bet on OpenAI, Microsoft has begun investing in other AI model developers too. Anthropic is the most recent.

On Tuesday, Anthropic announced it would be taking a new $5 billion in investment money from Microsoft and $10 billion from Nvidia. As part of the deal, Anthropic will purchase about $30 billion of compute capacity from Microsoft's Azure cloud service, which is powered by Nvidia chips. Anthropic says it's now the first frontier model to be available within all three major cloud services: Microsoft Azure, Amazon's AWS, and Google Cloud. Anthropic will work with Nvidia to optimize its models to run well on Nvidia chips. In September, Anthropic raised another $13 billion in funding, after which it was valued at $183 billion.

So, as it has done with OpenAI, Microsoft becomes both an investor in, and a major supplier to, Anthropic. In a less direct way, so does Nvidia. It's the latest example of the kind of circular financial arrangements that have become commonplace in the world of big AI.

In September, Nvidia announced a $100 billion investment in OpenAI, which the chip supplier will pay in installments that are contingent on OpenAI buying a certain number of chips from Nvidia. So Nvidia gets guaranteed chip sales and a 2% share of OpenAI. "These investments might be circular and raise related-party concerns, as Nvidia may own shares in a customer that will likely use such funds to buy more Nvidia gear," Morningstar equity analyst Brian Colello wrote at the time.

OpenAI struck a similar deal with Nvidia rival AMD in early October. OpenAI agreed to buy large quantities of AMD's Instinct AI chips on a set schedule over the next decade. If it keeps to the schedule, it'll get the option of buying a 10% stake in AMD.

The group of companies pouring billions into the coming AI revolution isn't really getting any bigger, and many of the participants seem to be placing bets on each other. The big spenders are Nvidia, Microsoft, Oracle, Meta, Google, Amazon, and some big financiers like SoftBank. A larger and more diverse set of players would give confidence that the burgeoning AI industry isn't just a big hype bubble.

Survey: Trust in AI is breaking along class divides

Edelman is out with a new report on trust in AI, titled AI Trust at a Crossroads, which finds that humans' trust of AI isn't growing quickly, and that trust levels break along class lines. Edelman surveyed 5,000 people in the U.S., Brazil, China, Germany, and the U.K. More than half of low-income respondents (54%) feel they'll be left out and left behind in the move toward AI, the survey finds. That's compared to 44% of middle-income and 31% of high-income people.

The study found a strong connection between higher trust in AI tools and higher usage of them. Low trust in AI stems from worries over how the systems will use, and whether they will protect, personal data. People, especially in developed countries, also worry about how they might be manipulated by AI.

At work, only a quarter of non-managers use AI tools weekly, versus 63% of managers. Tech (55%) and finance (43%) employees are most open to AI at work, while adoption is lowest in healthcare (28%), education (25%), food & beverage (23%), and transportation (20%).

Across all countries, 62% of younger people ages 18 to 34 say they generally trust AI, while 57% of people ages 35 to 54, and 40% of people 55 and older, say they trust it. Interestingly, only 40% of 18-to-34-year-olds in the U.S. say they trust AI.

More AI coverage from Fast Company:

Gemini 3 may be the moment Google pulls away in the AI arms race
Misinformation sites have an open-door policy for AI scrapers
AI browsers need the open web. So why are they trying to kill it?
A battle against the AI oligarchy is brewing in this wealthy New York district

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.


Category: E-Commerce

 

LATEST NEWS

2025-11-20 16:30:00 | Fast Company

A 1940 self-portrait by famed Mexican artist Frida Kahlo of her asleep in a bed could make history Thursday when it goes on sale by Sotheby's in New York. With an estimated price of $40 million to $60 million, El sueño (La cama), in English The Dream (The Bed), may surpass the top price for a work by any female artist when it goes under the hammer.

That record currently stands at $44.4 million, paid at Sotheby's in 2014 for Georgia O'Keeffe's Jimson Weed/White Flower No. 1. The highest price at auction for a Kahlo work is $34.9 million, paid in 2021 for Diego and I, depicting the artist and her husband, muralist Diego Rivera. Her paintings are reported to have sold privately for even more.

The painting up for auction depicts Kahlo asleep in a wooden colonial-style bed, wrapped in a golden blanket embroidered with crawling vines and leaves. Above her, seemingly levitating atop the bedposts, lies a full-sized skeleton. In its catalog note, Sotheby's said the painting offers "a spectral meditation on the porous boundary between sleep and death."

Last exhibited publicly in the late 1990s, the painting is the star of a sale of more than 100 surrealist works by artists including Salvador Dalí, René Magritte, Max Ernst, and Dorothea Tanning. They are from a private collection whose owner has not been disclosed.

Kahlo vibrantly and unsparingly depicted herself and events from her life, which was upended by a bus accident at 18. She started to paint while bedridden, underwent a series of painful surgeries on her damaged spine and pelvis, then wore casts until her death in 1954 at age 47. The suspended skeleton is often interpreted as a visualization of her anxiety about dying in her sleep, "a fear all too plausible for an artist whose daily existence was shaped by chronic pain and past trauma," the catalog notes.


Category: E-Commerce

 

2025-11-20 16:00:00 | Fast Company

They're cute, even cuddly, and promise learning and companionship, but artificial intelligence toys are not safe for kids, according to children's and consumer advocacy groups urging parents not to buy them during the holiday season.

These toys, marketed to kids as young as 2 years old, are generally powered by AI models that have already been shown to harm children and teenagers, such as OpenAI's ChatGPT, according to an advisory published Thursday by the children's advocacy group Fairplay and signed by more than 150 organizations and individual experts such as child psychiatrists and educators.

"The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm," Fairplay said.

AI toys, made by companies such as Curio Interactive and Keyi Technologies, are often marketed as educational, but Fairplay says they can displace important creative and learning activities. They promise friendship but also disrupt children's relationships and resilience, the group said.

"What's different about young children is that their brains are being wired for the first time, and developmentally it is natural for them to be trustful, for them to seek relationships with kind and friendly characters," said Rachel Franz, director of Fairplay's Young Children Thrive Offline Program. Because of this, she added, the amount of trust young children are putting in these toys can exacerbate the harms seen with older children.

Fairplay, a 25-year-old organization formerly known as the Campaign for a Commercial-Free Childhood, has been warning about AI toys for more than 10 years. They just weren't as advanced as they are today. A decade ago, during an emerging fad of internet-connected toys and AI speech recognition, the group helped lead a backlash against Mattel's talking Hello Barbie doll, which it said was recording and analyzing children's conversations.

"Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products," Franz said.

It's the second big seasonal warning against AI toys since consumer advocates at U.S. PIRG last week called out the trend in its annual Trouble in Toyland report, which typically looks at a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization tested four toys that use AI chatbots.

"We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls," the report said. One of the toys, a teddy bear made by Singapore-based FoloToy, was later withdrawn, its CEO told CNN this week.

Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don't have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, when they do this they use their imagination to create both sides of a pretend conversation, practicing creativity, language, and problem-solving, she said. An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would.

"We don't yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent, but it's very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds," Suskind said.

Beijing-based Keyi, maker of an AI petbot called Loona, didn't return requests for comment this week, but other AI toymakers sought to highlight their child safety protections.

California-based Curio Interactive makes stuffed toys, like Gabbo and rocket-shaped Grok, that have been promoted by the pop singer Grimes. Curio said it has meticulously designed guardrails to protect children, and the company encourages parents to monitor conversations, track insights, and choose the controls that work best for their family. "After reviewing the U.S. PIRG Education Fund's findings, we are actively working with our team to address any concerns, while continuously overseeing content and interactions to ensure a safe and enjoyable experience for children."

Another company, Miko, based in Mumbai, India, said it uses its own conversational AI model rather than relying on general large language model systems such as ChatGPT in order to make its product, an interactive AI robot, safe for children.

"We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics," said CEO Sneh Vaswani. "These new features complement our existing controls that allow parents and caregivers to identify specific topics they'd like to restrict from conversation. We will continue to invest in setting the highest standards for safe, secure, and responsible AI integration for Miko products."

Miko's products are sold by major retailers such as Walmart and Costco and have been promoted by the families of social media kidfluencers whose YouTube videos have millions of views. On its website, it markets its robots as "Artificial Intelligence. Genuine friendship."

Ritvik Sharma, the company's senior vice president of growth, said Miko "actually encourages kids to interact more with their friends, to interact more with the peers, with the family members, etc. It's not made for them to feel attached to the device only."

Still, Suskind and children's advocates say analog toys are a better bet for the holidays.

"Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn't only what the toy does; it's what it replaces. A simple block set or a teddy bear that doesn't talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them," she said. "Here's the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible."

Barbara Ortutay and Matt O'Brien, AP technology writers


Category: E-Commerce

 

