
2026-02-27 11:30:00| Fast Company

At hundreds of Burger King restaurants across the U.S., there's a new invisible worker who's tracking which ingredients are in stock, analyzing daily sales data, and checking in on whether employees are saying "Thank you" and "You're welcome." It's an AI assistant named Patty.

According to Thibault Roux, Burger King's chief digital officer, the voice-activated chatbot is designed to help employees and managers handle tasks that might usually require pulling out a computer or consulting an instruction guide. Patty began showing up at select locations about a year ago and is now in a pilot phase at approximately 500 Burger Kings. It's expected to roll out to the rest of the chain's U.S. locations by the end of the year. On a day-to-day basis, Patty has an array of functions, from letting a manager know if a store is low on onions to helping an employee build a new burger. But it has another role that's raising quite a few eyebrows: analyzing Burger King locations' friendliness by tracking employees' use of key phrases like "Welcome to Burger King," "Please," and "Thank you." Online, commenters are concerned that this functionality is a slippery slope toward 1984-style employee surveillance. In an interview with Fast Company, though, Roux clarified that Patty is not being used to analyze individual employees' performance, and is instead imagined as a kind of coach. "It's truly meant to be a coaching and operational tool to really help our restaurants manage complexities and stay focused on a great guest experience," Roux says. "Guests want our service to be more friendly, and that's ultimately what we're trying to achieve here."

Patty, are we running low on Diet Coke?

Technically, Patty is the chatbot version of Burger King's assistant platform, which collects data from operations including drive-through conversations, inventory, and sales, and then uses AI to analyze patterns in that data.
For now, Patty operates on a customized model from OpenAI, though Roux says the technology is flexible enough that it could integrate with another partner in the future (like Anthropic or Gemini) depending on the company's needs. For managers and employees in stores, Roux says Patty operates similarly to something like Siri. Patty is activated by a small button on the side of an employee's headset, and they can ask it direct verbal questions related to their specific store, like recent sales figures or inventory updates, as well as more general company information, to which the bot will provide a verbal answer. "If you're looking to clean the shake machine, [you can ask Patty] the procedures to clean it," Roux explains. "Or we have a lot of limited-time offers, and sometimes they can be cumbersome to remember. You can easily tap into Patty and be like, Hey, remind me, does the new maple bourbon barbecue build have crispy jalapeños?" Patty can also reach out to employees directly if it notices a pattern of interest. For example, if Patty thinks a specific store is out of lettuce, it might ping a manager to confirm. Once it's received confirmation, it can mark lettuce as sold out on that location's app and website, a process that previously would have required human intervention. Roux says franchisees and regional managers can decide how they want Patty to reach employees with information, whether it's through their headsets or via a text message (though the tech is programmed explicitly to never interrupt a worker during a customer interaction).

Insights from Burger King's assistant platform also live outside of employees' headsets. Managers can check information from the tool on an accompanying website or app. For example, Roux says, when a district manager is visiting a new store, they might ask Patty on the app, "What are the top three guest complaints at this location this week?" or "What are their top missing items?"
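The out-of-stock workflow Roux describes, detect a likely stockout, ping a manager to confirm, then mark the item sold out on the app and website, can be sketched as a small confirm-before-act loop. Everything below (function names, the threshold, the confirmation callback) is a hypothetical illustration, not Burger King's actual implementation:

```python
# Hypothetical sketch of the confirm-then-mark-sold-out flow described above.
# All names and thresholds are illustrative, not Burger King's actual system.

LOW_STOCK_THRESHOLD = 5  # units at or below which a stockout is suspected


def check_inventory(stock_counts: dict) -> list:
    """Return items the assistant suspects are out or nearly out."""
    return [item for item, count in stock_counts.items()
            if count <= LOW_STOCK_THRESHOLD]


def handle_suspected_stockout(item: str, manager_confirms) -> str:
    """Ping the manager; only mark sold out after human confirmation."""
    if manager_confirms(item):  # e.g. a headset or text-message prompt
        return f"{item}: marked sold out on app and website"
    return f"{item}: stock confirmed, no action taken"


suspected = check_inventory({"lettuce": 2, "onions": 40, "buns": 120})
results = [handle_suspected_stockout(i, lambda item: item == "lettuce")
           for i in suspected]
```

The key design point the article highlights is that the bot never marks an item sold out on its own: the human confirmation gate sits between detection and action.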
In an interview with Fast Company writer Jeff Beer earlier this month, Burger King President Tom Curtis said the assistant platform has already led to some significant menu changes. Curtis explained that the AI tracked all the times that team members said "I'm sorry, we don't have that" and linked them back to a common denominator: apple pie. In January, Burger King brought back its apple pie for the first time since 2020.

"We're in the idiocracy version of 1984"

Patty's more straightforward uses, like helping managers access sales data and check inventory, seem fairly predictable in the context of fast food. Where Burger King is really pushing Patty's use cases, though, is with its friendliness metric. In an interview with The Verge on February 26, Roux said Patty would recognize phrases like "Welcome to Burger King," "Please," and "Thank you," and then give managers access to data on their location's friendliness performance based on those keywords. Mere hours after that piece went live, a thread on Patty in the subreddit r/technology had already amassed more than 15,000 upvotes and nearly 3,000 comments. Common refrains from users include comparing the technology to the surveillance state in George Orwell's novel 1984, labeling it authoritarian and dystopian, and accusing Burger King of employee surveillance. "This would be criticized as being cartoonishly unrealistic in a sci-fi movie 10 years ago," one user wrote. Another added, "We're in the idiocracy version of 1984." When asked about this response, Roux says the data from employees' conversations is anonymized, and that none of these friendliness metrics will be used for grading or assessing individuals. Further, he adds, Patty will not directly instruct employees on what to say or how to say it. Instead, data on friendliness will be shared with managers, who can use it for face-to-face coaching with their teams. Still, it's unclear exactly how Patty is quantifying friendliness.
In a video explanation of the feature, a manager is shown asking the bot, "Is there anything that needs my immediate attention?" to which it responds, "The team's friendliness scores this morning were the highest this week." In an email to Fast Company, a Burger King spokesperson said, "In select pilot locations, we've explored using aggregated keywords, including common hospitality phrases, as one of several signals to help managers understand overall service patterns. The tool is not used to score individuals or enforce scripts." Burger King did not respond to Fast Company's request for clarification on how friendliness scores are calculated. So far, Roux says he's seen growing interest in Patty from franchisees, with several managers making specific requests for future add-ons. "A lot of our franchisees . . . and regional general managers are very competitive, so they want to know, Hey, how do I compare to other restaurants?" Roux says. "I think that's something that we're going to be rolling out. In fact, we were looking at some of the designs earlier this week with the franchisees. So this is only the beginning."
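How Patty actually computes a friendliness score is not public. The spokesperson's description, "aggregated keywords, including common hospitality phrases, as one of several signals," is consistent with something as simple as counting key-phrase occurrences across a location's anonymized transcripts, aggregated per store rather than per employee. A minimal sketch of that counting step, with all phrase lists and sample transcripts hypothetical:

```python
# Illustrative sketch of keyword-based "friendliness" aggregation.
# Burger King has not disclosed its scoring method; this only demonstrates
# the kind of aggregated, non-individual phrase counting its spokesperson
# describes. Phrases and transcripts below are hypothetical examples.

HOSPITALITY_PHRASES = ["welcome to burger king", "please", "thank you"]


def count_hospitality_phrases(transcripts):
    """Count key-phrase occurrences across a location's anonymized transcripts."""
    counts = {phrase: 0 for phrase in HOSPITALITY_PHRASES}
    for text in transcripts:
        lowered = text.lower()
        for phrase in HOSPITALITY_PHRASES:
            counts[phrase] += lowered.count(phrase)
    return counts


transcripts = [
    "Welcome to Burger King, what can I get you?",
    "That'll be $8.50, please pull forward. Thank you!",
]
counts = count_hospitality_phrases(transcripts)
```

Note that because the tallies are pooled across all of a store's transcripts, nothing in the output traces back to an individual employee, which matches the anonymization claim in the article, though it also shows why raw counts alone say little about how a phrase was delivered.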


Category: E-Commerce

 


2026-02-27 11:09:00| Fast Company

Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has now implemented some restrictions, this incident revealed a serious weakness. Without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities. Last fall, a mother discovered why her teenage daughter's mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She's not alone. Aura's State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls' emotions, including stress, jealousy, and loneliness: 51% compared with 36% for boys. That's unacceptable, and we need to do better.

The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models that are trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.

Uneven playing field

In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly caused concerns about bias, fairness, and data privacy. A study published by the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases. These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women's perspectives, they replicate existing inequities and introduce new risks. The pattern is clear.
AI is failing girls and women.

Pivotal moment

This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn's latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives. Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn't blind optimism. Instead, it offers a perspective that is often missing in today's AI development. Creating technology is an exercise of power and holds great responsibility. Since girls are often the most affected by AI's failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed ReThink, an anti-bullying tool, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard's Public Interest Tech Lab. Their achievements demonstrate the potential when women lead in tech development.

Smart steps

If we want safer, more responsible AI systems, three steps are essential. First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results.
For instance, one Girls Who Code student used the skills she learned to create an app called AIFinTech to help immigrant families manage their personal finances. Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. This means ensuring gender diversity on AI ethics boards and that government AI committees are representative of the demographics most affected. Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually. We do not have to accept AI's flaws by default. We are witnessing AI's impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems not just for themselves, but for everyone.


Category: E-Commerce

 

2026-02-27 11:00:00| Fast Company

What began as a race to build better AI models has escalated into a competition for compute, talent, and control. Foundation models, large-scale systems trained on vast datasets to generate text, images, code, and decisions, now underpin everything from enterprise software and cloud infrastructure to national digital strategies. The industry's language around AI has grown more ambitious, and more elastic. "Agentic AI" has leapt from research papers to Davos billboards, while artificial general intelligence, or AGI, now appears routinely in investor decks and earnings calls. Definitions have begun to blur. Some companies quietly lower the bar for what qualifies as "general," stretching the term to encompass incremental productivity gains. Yet the economic results, particularly measurable returns on AI investment, remain uneven. According to PwC's 2026 Global CEO Survey, 56% of 4,454 CEOs across 95 countries reported neither increased revenue nor reduced costs from AI over the past 12 months. Only 12% achieved both. Even so, 51% plan to continue investing, despite declining confidence in revenue growth. The result is a widening gap between engineering reality, commercial storytelling, and public expectation.

Few voices carry as much authority, or have shaped modern AI as directly, as Andrew Ng. The founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, Ng has helped define nearly every major phase of the field, from early deep-learning breakthroughs to the current wave of enterprise deployment. He has authored or coauthored more than 200 papers and previously led the Stanford AI Lab. In 2024, he popularized the term "agentic AI," arguing that multistep, tool-using systems capable of executing workflows may deliver more near-term economic value than simply scaling larger models. In an exclusive conversation, Ng offered Fast Company a reality check.
He says true AGI, that is, AI capable of performing the full breadth of human intellectual tasks, remains decades away. The true competitive frontier, meanwhile, lies elsewhere. This conversation has been edited for length and clarity.

You helped popularize the term agentic AI to describe a spectrum of autonomy in AI systems. How did you come up with it, and how has the concept evolved as multi-agent systems move into enterprise production?

I began using the term almost two and a half years ago, though I didn't publicly take credit for it at the time. I started using it because I felt the community needed language that shifted the focus toward AI systems capable of taking multiple steps of reasoning and action, not just a single prompt-and-response exchange. More specifically, I felt there would be a spectrum of AI systems, some slightly autonomous or slightly agentic, and others highly agentic, where they take many steps of actions and work for a long time. No one was using the term agentic to describe this concept before I began using it. I started introducing it in my newsletter and in talks at conferences and industry events, and it quickly gained traction there. I didn't expect marketers to run with it the way they did. When I attended Davos this year, I saw the word plastered on the sides of buildings. Even outside San Francisco, agentic now appears on billboards. I did want to intentionally promote the use of the term, but seeing how common it has become, I sometimes wonder if I overdid it.

Enterprise adoption of agentic AI is accelerating, yet many organizations are struggling with integration, governance, and measurable ROI. Why is that?

Two years ago, there was intense hype around AI's risks and dangers, among other concerns. Last year, businesses began shifting their focus toward real-world implementation. This year, the conversation has moved firmly to ROI.
Even though many companies are not yet seeing strong returns, they continue to invest because they understand that AI will eventually deliver value. The discussion has shifted from excitement about what AI might do to a more grounded focus on how it can generate real economic impact. There's also an interesting split-screen dynamic emerging. On one hand, many businesses say agentic AI is not yet delivering meaningful ROI, and they're right. At the same time, teams building agentic workflows are seeing rapid growth and real, valuable implementations. The agentic movement still has very low penetration, but it is compounding quickly.

What are the most significant mistakes enterprises make when deploying agentic systems at scale, and how should leaders rethink their technology and operating models to overcome them?

Many businesses are pursuing bottom-up innovation, which is valuable, but the limitation is that it often leads to point solutions that deliver incremental efficiency gains rather than transformative change. If AI automates just one step in a process, for example, it might save an hour of human work and reduce costs. That's useful and worth doing, but it doesn't fundamentally change the business. Much of today's AI deployment falls into this category: incremental improvement rather than full transformation. To unlock real value, companies need to look beyond optimizing individual tasks and start reimagining entire workflows. Doing so requires top-down leadership. Often no single person working on one step has the authority to reshape the entire process, which is why executive-level direction becomes essential. Real impact comes from tailoring AI strategy to each organization's specific context rather than following generic industry playbooks.

There is a growing debate about whether we are in the midst of an AI bubble or simply an early infrastructure build-out comparable to the internet era.
How do you distinguish between speculative hype and genuinely durable AI value being created today?

At the application layer, I don't think we're in an AI bubble. AI is expanding rapidly across business use cases: how we process legal and technical documents, manage customer success workflows, conduct research, and much more. I would like to see more investment in AI applications and inference infrastructure. Right now, there simply isn't enough inference capacity, and worries around rate limits exist. The more interesting question about a potential bubble sits in the model training layer, where infrastructure spending continues to surge. If any risk exists, it's highest there because the largest investments are concentrated among a small number of players. When companies build highly specialized hardware that can only be reused for inference with some inefficiency, the risk of overbuilding increases. I don't think we're overbuilding right now, but if any part of the AI market faces that possibility, it's the training layer.

As the industry moves beyond a single-model mindset toward more diverse agentic systems, how should enterprises think about AI architecture? Is there likely to be one dominant framework for building scalable, real-world AI systems, or will organizations need a more flexible approach?

Software can range from five lines of code to massive systems that run for years. Because of that range, there won't be a one-size-fits-all approach to building or governing these systems. Just as we don't use a single framework to manage everything from simple scripts to enterprise platforms, we won't rely on one architecture for agentic AI. Human work itself is incredibly diverse, from basic tasks like spell-checking to analyzing complex financial documents. Since the work varies so much, the AI systems we build will also need to vary. One principle my teams follow when building agentic AI systems is speed, as continuous improvement is essential.
Our typical cycle involves building carefully to avoid major risks, testing with users, gathering feedback, and refining the system until it truly works well. That rapid loop is what helps teams build reliable, high-performing systems faster.

Agentic AI is rapidly increasing systems' ability to reason and act with limited human intervention. Does the rise of agentic architectures meaningfully accelerate the path toward AGI, or are we still far from true general intelligence?

Most of the public thinks of AGI as AI that is as intelligent as people, and one useful definition is AI that can perform any intellectual task a human can. You and I could learn to fly an airplane with maybe 20 hours of training, learn to drive a truck through a forest, or spend a few years writing a PhD thesis. Most humans can do these things. We're still very far from AI meeting that definition of AGI. For alternative definitions that some businesses have put forward, definitions that dramatically lower the bar, you could argue we have already achieved AGI. There's a good chance that under these lower-bar definitions, some businesses will soon try to declare success. But that won't mean AI has reached human-level intelligence; it will simply mean the definition has been reworked to fit a much lower threshold. Maybe a year ago, AGI felt 50 years away. Over the past year, perhaps we've made a solid 2% of progress, with another 49 years to go. These numbers are metaphorical, so don't take them too seriously. [Laughs] But we are closer than before, yet many decades away from an AI that matches human intelligence. If you stick with the original definition, aligned with what people genuinely imagine AGI to be, we remain very, very far away.

Is geopolitical fragmentation reshaping global AI strategy for both governments and enterprises?

One of the other big themes I'm seeing is sovereign AI.
The world is becoming more fragmented, and there's a lot of discussion about how nation-states want to make sure they have access to AI without needing to rely on other nations or any single company that they may not fully trust or be able to rely on in the long term. Governments and regions are thinking carefully about how to build and maintain their own AI capabilities so they can remain competitive and secure. As AI becomes more central to economic growth and national security, this question of who controls the infrastructure and models becomes much more important. So alongside enterprise adoption, there's also a growing geopolitical dimension to AI deployment.

In 2026, as enterprises search for real economic returns from AI, what leadership decisions and workforce shifts will ultimately determine whether organizations capture meaningful value from agentic systems?

Leadership matters. When I work with CEOs, I see decisive moments when the C-suite must think strategically about what to invest in and then place those bets thoughtfully, guided by a clear understanding of what the technology can and cannot do, not just the surrounding hype. In periods of transformation, leadership decisions determine whether an organization captures real value from AI or merely experiments at the margins. I often speak with CEOs before they set a major strategic direction. No one knows exactly where AI will be in a few years, so we are operating in a kind of fog of war. But uncertainty does not mean we don't know anything. Teams and partners who understand the technology well can narrow that uncertainty significantly and make far more informed decisions. At the same time, everyone should learn to code, or at least learn to build software with AI. AI has lowered the barrier to creating custom tools. Today my marketers, recruiters, HR professionals, and financial analysts who use AI to write code are already more productive than those who do not.
When I hire, I increasingly prefer people who know how to build with AI assistance. I may have been early on this shift, but I now see more startups and established companies moving in the same direction. Just as it became unthinkable to hire someone who could not search the web or use email, I am already at the point where I hesitate to hire knowledge workers who cannot use AI to build or automate with code.


Category: E-Commerce

 
