
2025-08-20 13:28:19 | Fast Company

It's rare for a tech titan to show any weakness or humanity. Yet even OpenAI's notoriously understated CEO Sam Altman had to admit this week that the rollout of the company's new GPT-5 large language model was a complete disaster. "We totally screwed up," Altman admitted in an interview with The Verge.

I agree. As a former OpenAI beta tester, and someone who currently spends over $1,000 per month on OpenAI's API, I've eagerly anticipated the launch of GPT-5 for over a year. When it finally arrived, though, the model was a mess. In contrast to the company's previous GPT-4 series of models, GPT-5's responses feel leaden, cursory, and boring. The new model also makes dumb mistakes on simple tasks and generates shortened answers to many queries.

Why is GPT-5 so awful? It's possible that OpenAI hobbled its new model as a cost-cutting measure. But I have a different theory: GPT-5 completely lacks emotional intelligence. And its inability to understand and replicate human emotion cripples the model, especially on any task requiring nuance, creativity, or a complex understanding of what makes people tick.

Getting Too Attached

When OpenAI launched its GPT-4 model in 2023, researchers immediately noted its outstanding ability to understand people. An updated version of the model (dubbed GPT-4.5 and released in early 2025) showed even higher levels of emotional intelligence and creativity. Initially, OpenAI leaned into its model's talent for understanding people, using terms cribbed from the world of psychology to describe the model's update. "Interacting with GPT-4.5 feels more natural. Its broader knowledge base, improved ability to follow user intent, and greater 'EQ' make it useful for tasks like improving writing, programming, and solving practical problems," OpenAI wrote in the model's release notes, subtly dropping in a common psychological term used to measure a person's emotional intelligence.

Soon, though, GPT-4's knack for human-like emotional understanding took a more concerning turn. Plenty of people used the model for mundane office tasks, like writing code and interpreting spreadsheets. But a significant subset of users put GPT-4 to a different use, treating it like a companion, or even a therapist. In early 2024, studies showed that GPT-4 provided better responses than many human counselors. People began to refer to the model as a friend, or even treat it as a confidant or lover. Soon, articles began appearing in major news sources like The New York Times about people using the chatbot as a practice partner for challenging conversations, a stand-in for human companionship, or even an aide for counseling patients.

This new direction clearly spooked OpenAI. As Altman pointed out in a podcast interview, conversations with human professionals like lawyers and therapists often involve strong privacy and legal protections. The same may not be true for intimate conversations with chatbots like GPT-4. Studies have also shown that chatbots can make mistakes when providing clinical advice, potentially harming patients. And the bots' tendency to keep users talking, often by reinforcing their beliefs, can lead vulnerable users into a state of "AI psychosis," where the chatbot inadvertently validates their delusions and sends them into a dangerous emotional spiral.

Shortly after the GPT-5 launch, Altman discussed this at length in a post on the social network X. "People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that," Altman wrote. "We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks." Altman went on to acknowledge that a lot of people effectively use ChatGPT as "a sort of therapist or life coach." While this can be really good, Altman admitted that it made him "deeply uneasy." In his words, if "users have a relationship with ChatGPT where they think they feel better after talking but they're unknowingly nudged away from their longer term well-being (however they define it), that's bad."

Lobotomize the Bot

To avoid that potentially concerning, and legally damaging, direction, OpenAI appears to have deliberately dialed back its bot's emotional intelligence with the launch of GPT-5. The release notes for the new model say that OpenAI has taken steps toward "minimizing sycophancy": tech speak for making the bot less likely to reinforce users' beliefs and tell them what they want to hear. OpenAI also says that GPT-5 errs on the side of "safe completions," giving vague or high-level responses to queries that are potentially damaging, rather than refusing to answer them or risking a wrong or harmful answer. OpenAI also writes that GPT-5 is less "effusively agreeable," and that in training it, the company gave the bot example prompts that led it to agree with users and reinforce their beliefs, and then taught it not to do that.

In effect, OpenAI appears to have lobotomized the bot, potentially removing or reconfiguring, through training and negative reinforcement, the parts of its virtual brain that handle many of the emotional aspects of its interactions with users. This may have seemed fine in early testing; most AI benchmarks focus on productivity-centered tasks like solving complex math problems and writing Python code, where emotional intelligence isn't necessary. But as soon as GPT-5 hit the real world, the problems with tweaking its emotional center became immediately obvious. Users took to social media to share how the switch to GPT-5 and the loss of the GPT-4 model felt like losing a friend. Longtime fans of OpenAI bemoaned the cold tone of GPT-5, its curt and business-like responses, and the loss of an ineffable spark that made GPT-4 a powerful assistant and companion.

Emotion Matters

Even if you don't use ChatGPT as a pseudo-therapist or friend, the bot's emotional lobotomy is a huge issue. Creative tasks like writing and brainstorming require emotional understanding. In my own testing, I've found GPT-5 to be a less compelling writer, a worse idea generator, and a terrible creative companion. If I asked GPT-4 to research a topic, I could watch its chain of reasoning as it carefully considered my motivations and needs before providing a response. Even with Thinking mode enabled, GPT-5 is much more likely to quickly spit out a cursory response to my query, or to provide a response that focuses solely on the query itself and ignores the human motivations of the person behind it.

With the right prompting, GPT-4 could generate smart, detailed, nuanced articles or research reports that I would actually want to read. GPT-5 feels more like interacting with a search engine, or reading text written in the dull prose of a product manual.

To be fair, for enterprise tasks like quickly writing a web app or building an AI agent, GPT-5 excels. And to OpenAI's credit, use of its APIs appears to have increased since the GPT-5 launch. Still, for many creative tasks, and for many users outside the enterprise space, GPT-5 is a major backslide.

OpenAI appears genuinely blindsided by the anger many users felt about the GPT-5 rollout and the bot's apparent emotional stuntedness. OpenAI leader Nick Turley admitted to The Verge that "the degree to which people had such strong feelings about a particular model…was certainly a surprise to me." Turley went on to say that the level of passion users have for specific models is "quite remarkable" and that, in a truly techie bit of word choice, it "recalibrated" his thinking about the process of releasing new models, and the things OpenAI owes its long-time users.

The company now seems to be aggressively rolling back elements of the GPT-5 launch: restoring access to the old GPT-4 model, making GPT-5 warmer and friendlier, and giving users more control over how the new model processes queries.

Admitting when you're wrong, psychologists say, is a hallmark of emotional intelligence. Ironically, Altman's response to the GPT-5 debacle demonstrates rare emotional nuance, at the exact moment that his company is pivoting away from such things. OpenAI could learn a thing or two from its leader. Whether you're a CEO navigating a disastrous rollout or a chatbot conversing with a human user, there's a simple yet essential lesson to forget at your peril: emotion matters.


Category: E-Commerce

 

LATEST NEWS

2025-08-20 13:15:00 | Fast Company

Claire's has found a buyer just two weeks after filing for Chapter 11 bankruptcy protection. It announced on Wednesday, August 20, that it plans to sell its North America business and IP to Ames Watson, a private equity firm. Courts in the U.S. and Canada must approve the sale for it to proceed.

The company began bankruptcy proceedings on August 6 with $690 million in debt. However, Claire's hasn't disclosed the amount Ames Watson would pay for the assets. It did state that the sale will significantly benefit its attempt to create value during restructuring.

Finding a buyer has been a critical goal for Claire's. At the time of filing, Claire's CEO Chris Cramer said the company was in active discussions with potential strategic and financial partners to find alternatives to shutting down stores. Claire's had claimed its North American stores would stay open during bankruptcy proceedings, but named 18 locations across the country that would likely close soon. It said another 1,236 stores could close by October 31 if the company didn't find a buyer in time.

In light of the agreement, Claire's has paused the liquidation process at a significant number of stores. The stores that will stay open could total as many as 950 in North America, though some stores in the region will continue with liquidation.

"We are pleased to have the opportunity to partner with Claire's and support the next chapter for this iconic brand," Ames Watson CEO Lawrence Berger said in a statement. "We are committed to investing in its future by preserving a significant retail footprint across North America, working closely with the Claire's team to ensure a seamless transition and creating a renewed path to growth based on our deep experience working with consumer brands."



 

2025-08-20 13:00:00 | Fast Company

What if scientists could predict northern and southern lights like they could an eclipse? What if they could tell you where and when to be outside, within a narrow window, to see these vibrant displays? A new AI might make that possible.

Today, IBM introduced Surya, an open-source foundational AI model that was developed in partnership with heliophysics scientists at NASA. Surya is like "an AI telescope for the sun" that can also look into the future, explained Juan Bernabe Moreno, director of IBM Research in Europe, the U.K., and Ireland. Not only can Surya model what the sun looks like now, but it can also predict our star's future behavior. This is key for understanding solar flares, and whether they will produce coronal mass ejections (CMEs) and subsequent geomagnetic storms, which cause northern lights. That's also important, as these can significantly disrupt life on Earth; a severe space weather risk scenario published by the London-based Lloyd's insurance marketplace presented possible global economic losses of up to $9.1 trillion over a five-year period.

Surya can model future active regions on the sun

We are currently at or near solar maximum, which means our star is at the most active part of its 11-year cycle. This means increased sunspots, which are the source of large solar flares. These flares can subsequently trigger CMEs, which, when directed at Earth, produce geomagnetic storms. The increased aurora borealis (northern lights) activity over the past year has been the result of these geomagnetic storms.

But these blasts of energetic particles, solar material, and magnetic fields can have negative effects as well. They disrupt communication, overload power transformers, interrupt GPS, present a threat to astronauts, and can even cause newly launched satellites to fall out of the sky.

[Photo: IBM]

Until now, scientists have struggled to predict solar flares. But Surya provides a visual AI model of the sun. It's a virtual telescope that can predict solar flares up to two hours before they occur, including the location, direction, and strength of the flare. What's more, Surya provides active region emergence forecasting, which can predict which regions of the sun will become active in the next 24 hours, and also gives a four-day lead time for the prediction of solar wind speed.

Building an AI telescope

Surya was trained on nine years' worth of high-resolution images from NASA's Solar Dynamics Observatory. These are large (almost 4K resolution) images, in which every detail matters. That was a challenge. "AI is lazy," Bernabe Moreno explained to Fast Company. Traditional AI, if it sees many images and then sees a detail in one but in no others, blurs the detail. But that wasn't an option with the sun, and so the team had to teach the model to include the details, rather than ignoring them.

The key to Surya is that it's not designed to be, say, a tool that predicts solar flares. All of these examples of what Surya can do are simply suggested use cases. It's a foundational AI designed to model the sun in the present and future, which means the use cases for it are virtually limitless. The model is open-source and publicly available on Hugging Face for anyone to use, which the company hopes will foster scientific exploration. It has 366 million parameters; the smaller model size prioritizes performance and wide adoption. (For comparison, experts say ChatGPT-4 has as many as 1.8 trillion parameters.)

IBM and NASA's collaboration continues

There's more to come from the partnership between IBM and NASA. Surya is just one part of the IBM-NASA Prithvi foundational models, which aim to explore our planet and solar system. Prithvi uses Earth observation data to model weather and climate. NASA has identified five different science priorities, including astrophysics and planetary science, all of which eventually will have IBM-designed AI foundational models.
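To put that parameter gap in perspective, here is a quick back-of-the-envelope calculation using only the figures quoted above (keeping in mind that the GPT-4 number is a circulated estimate, not an official count):

```python
# Parameter counts as quoted in the article.
surya_params = 366_000_000        # Surya: 366 million parameters
gpt4_params = 1_800_000_000_000   # GPT-4: ~1.8 trillion (unofficial estimate)

# How many times larger the rumored GPT-4 is than Surya.
ratio = gpt4_params / surya_params
print(f"GPT-4's estimated size is roughly {ratio:,.0f}x Surya's")
```

In other words, Surya is nearly 5,000 times smaller than the rumored GPT-4, which helps explain why IBM can position it for wide adoption on ordinary research hardware.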



 
