2026-02-27 11:09:00| Fast Company

Recently, Grok AI faced criticism after users found it was creating explicit images of real people, including women and children. Although xAI has now implemented some restrictions, the incident revealed a serious weakness: without safeguards and diverse perspectives, girls and women are put at greater risk. The dangers artificial intelligence poses to women and girls are real and happening now, affecting their mental health, safety, healthcare, and economic opportunities.

Last fall, a mother discovered why her teenage daughter's mental health had been deteriorating: It was a result of conversations with a Character.AI chatbot. She's not alone. Aura's State of Youth Report, released in December, found that parents believe technology has a more negative effect on girls' emotions, including stress, jealousy, and loneliness (51%, compared with 36% for boys). That's unacceptable, and we need to do better.

The risks extend beyond mental health. OpenAI recently reported that more than 40 million Americans seek health information on ChatGPT daily. As AI in healthcare expands, the consequences of biased training data can be dangerous. AI models trained predominantly on male health data produce worse outcomes for women. For instance, an AI model designed to detect liver disease from blood tests missed 44% of cases in women, compared with 23% in men.

Uneven playing field

In the workplace, AI is not leveling the playing field. Despite laws prohibiting discrimination, AI-powered hiring tools have repeatedly raised concerns about bias, fairness, and data privacy. A study published by the University of Washington found that in AI resume screenings, the technology favored female-associated names in only 11% of cases.

These failures reflect who is building our technology. Women make up just 22% of the AI workforce. When systems are designed without women's perspectives, they replicate existing inequities and introduce new risks. The pattern is clear: AI is failing girls and women.

Pivotal moment

This could not come at a more pivotal moment in the job market. A quarter of the roles on LinkedIn's latest list of the 25 fastest-growing jobs in the United States are tech-related, with AI engineers at the top. Decisions about how AI is designed today will shape access to jobs, healthcare, education, and civic life for decades. It is critical that women play an active role in developing new AI tools so that inequity is not baked into the systems that increasingly govern our lives.

Young women are not disengaged from AI. Research conducted last year by Girls Who Code, in partnership with UCLA, found that young women are deeply thoughtful about the dual nature of technology. They see its potential to advance healthcare, expand educational access, and address climate change. They are also aware of its dangers, such as bias, surveillance, and exclusion from development. This isn't blind optimism. Instead, it offers a perspective that is often missing in today's AI development.

Creating technology is an exercise of power and carries great responsibility. Since girls are often the most affected by AI's failures, they must be empowered to help lead the solutions. Women like Girls Who Code alumna Trisha Prabhu, who developed ReThink, an anti-bullying tool, exemplify this. Latanya Sweeney, recognized as one of the top thinkers in AI, founded Harvard's Public Interest Tech Lab. Their achievements demonstrate what is possible when women lead in tech development.

Smart steps

If we want safer, more responsible AI systems, three steps are essential.

First, computer science education should integrate social impact. Coding cannot be taught in isolation from its consequences. Students should learn technical skills alongside critical analysis of how technology shapes communities and lives. This approach produces results: one Girls Who Code student used the skills she learned to create an app called AIFinTech to help immigrant families manage their personal finances.

Second, women must be represented in AI development and governance, particularly those from historically underserved communities. They need seats at the tables where AI systems are designed, tested, and regulated. This means ensuring gender diversity on AI ethics boards and making government AI committees representative of the demographics most affected.

Finally, how we evaluate artificial intelligence needs to evolve. Today, AI is assessed by efficiency, accuracy, and profitability. We must also evaluate health, equity, and well-being, especially for girls and young women. Before an AI system is deployed in a high-stakes environment such as healthcare, it should be required to pass tests for gender bias and demonstrate that it does not produce disparate outcomes. New York City, for example, requires employers that use automated employment decision tools to undergo an independent bias audit annually.

We do not have to accept AI's flaws by default. We are witnessing AI's impact on girls in real time, and we must seize the opportunity to change course while the technology is still being shaped. When girls are given the chance to lead in AI, they will build safer systems, not just for themselves but for everyone.
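The kind of pre-deployment disparate-outcome test described above can be sketched in a few lines of Python. This is an illustrative sketch, not any real audit standard: the toy data, the group labels, and the 10-percentage-point tolerance are assumptions chosen for the example, loosely echoing the liver-disease statistic (far more missed cases in one group than the other).

```python
# Illustrative pre-deployment fairness check: compare a classifier's
# false negative rate (missed positive cases) across demographic groups.
# All data, labels, and the tolerance below are invented for this sketch.

def false_negative_rate(y_true, y_pred):
    """Fraction of actual positive cases the model failed to flag."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    missed = sum(1 for t, p in positives if p == 0)
    return missed / len(positives)

def audit_by_group(y_true, y_pred, groups, max_gap=0.10):
    """Per-group FNRs, plus whether the gap stays within a chosen tolerance."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = false_negative_rate([y_true[i] for i in idx],
                                       [y_pred[i] for i in idx])
    gap = max(rates.values()) - min(rates.values())
    return rates, gap <= max_gap

# Toy example: the model misses half the positive cases in group "F"
# but only a quarter in group "M", so the audit flags it.
y_true = [1, 1, 1, 1, 1, 1, 1, 1]
y_pred = [0, 0, 1, 1, 1, 0, 1, 1]
groups = ["F", "F", "F", "F", "M", "M", "M", "M"]

rates, passed = audit_by_group(y_true, y_pred, groups)
print(rates, passed)
```

A real audit would of course use held-out clinical or hiring data and a tolerance set by policy rather than an arbitrary 0.10, but the structure (disaggregate a metric by group, then gate deployment on the gap) is the same.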


Category: E-Commerce

 


2026-02-27 11:00:00| Fast Company

What began as a race to build better AI models has escalated into a competition for compute, talent, and control. Foundation models, large-scale systems trained on vast datasets to generate text, images, code, and decisions, now underpin everything from enterprise software and cloud infrastructure to national digital strategies.

The industry's language around AI has grown more ambitious, and more elastic. "Agentic AI" has leapt from research papers to Davos billboards, while artificial general intelligence, or AGI, now appears routinely in investor decks and earnings calls. Definitions have begun to blur. Some companies quietly lower the bar for what qualifies as "general," stretching the term to encompass incremental productivity gains. Yet the economic results, particularly measurable returns on AI investment, remain uneven. According to PwC's 2026 Global CEO Survey, 56% of 4,454 CEOs across 95 countries reported neither increased revenue nor reduced costs from AI over the past 12 months. Only 12% achieved both. Even so, 51% plan to continue investing, despite declining confidence in revenue growth. The result is a widening gap between engineering reality, commercial storytelling, and public expectation.

Few voices carry as much authority, or have shaped modern AI as directly, as Andrew Ng. The founder of DeepLearning.AI and Coursera, executive chairman of Landing AI, and founding lead of the Google Brain team, Ng has helped define nearly every major phase of the field, from early deep-learning breakthroughs to the current wave of enterprise deployment. He has authored or coauthored more than 200 papers and previously led the Stanford AI Lab. In 2024, he popularized the term "agentic AI," arguing that multistep, tool-using systems capable of executing workflows may deliver more near-term economic value than simply scaling larger models.

In an exclusive conversation, Ng offered Fast Company a reality check. He says true AGI, that is, AI capable of performing the full breadth of human intellectual tasks, remains decades away. The true competitive frontier, meanwhile, lies elsewhere. This conversation has been edited for length and clarity.

You helped popularize the term agentic AI to describe a spectrum of autonomy in AI systems. How did you come up with it, and how has the concept evolved as multi-agent systems move into enterprise production?

I began using the term almost two and a half years ago, though I didn't publicly take credit for it at the time. I started using it because I felt the community needed language that shifted the focus toward AI systems capable of taking multiple steps of reasoning and action, not just a single prompt-and-response exchange. More specifically, I felt there would be a spectrum of AI systems, some slightly autonomous or slightly agentic, and others highly agentic, where they take many steps of action and work for a long time.

No one was using the term "agentic" to describe this concept before I began using it. I started introducing it in my newsletter and in talks at conferences and industry events, and it quickly gained traction there. I didn't expect marketers to run with it the way they did. When I attended Davos this year, I saw the word plastered on the sides of buildings. Even outside San Francisco, "agentic" now appears on billboards. I did want to intentionally promote the use of the term, but seeing how common it has become, I sometimes wonder if I overdid it.

Enterprise adoption of agentic AI is accelerating, yet many organizations are struggling with integration, governance, and measurable ROI. Why is that?

Two years ago, there was intense hype around AI's risks and dangers, among other concerns. Last year, businesses began shifting their focus toward real-world implementation. This year, the conversation has moved firmly to ROI. Even though many companies are not yet seeing strong returns, they continue to invest because they understand that AI will eventually deliver value. The discussion has shifted from excitement about what AI might do to a more grounded focus on how it can generate real economic impact.

There's also an interesting split-screen dynamic emerging. On one hand, many businesses say agentic AI is not yet delivering meaningful ROI, and they're right. At the same time, teams building agentic workflows are seeing rapid growth and real, valuable implementations. The agentic movement still has very low penetration, but it is compounding quickly.

What are the most significant mistakes enterprises make when deploying agentic systems at scale, and how should leaders rethink their technology and operating models to overcome them?

Many businesses are pursuing bottom-up innovation, which is valuable, but the limitation is that it often leads to point solutions that deliver incremental efficiency gains rather than transformative change. If AI automates just one step in a process, for example, it might save an hour of human work and reduce costs. That's useful and worth doing, but it doesn't fundamentally change the business. Much of today's AI deployment falls into this category: incremental improvement rather than full transformation.

To unlock real value, companies need to look beyond optimizing individual tasks and start reimagining entire workflows. Doing so requires top-down leadership. Often no single person working on one step has the authority to reshape the entire process, which is why executive-level direction becomes essential. Real impact comes from tailoring AI strategy to each organization's specific context rather than following generic industry playbooks.

There is a growing debate about whether we are in the midst of an AI bubble or simply an early infrastructure build-out comparable to the internet era. How do you distinguish between speculative hype and genuinely durable AI value being created today?

At the application layer, I don't think we're in an AI bubble. AI is expanding rapidly across business use cases: how we process legal and technical documents, manage customer success workflows, conduct research, and much more. I would like to see more investment in AI applications and inference infrastructure. Right now, there simply isn't enough inference capacity, and worries around rate limits exist.

The more interesting question about a potential bubble sits in the model training layer, where infrastructure spending continues to surge. If any risk exists, it's highest there because the largest investments are concentrated among a small number of players. When companies build highly specialized hardware that can only be reused for inference with some inefficiency, the risk of overbuilding increases. I don't think we're overbuilding right now, but if any part of the AI market faces that possibility, it's the training layer.

As the industry moves beyond a single-model mindset toward more diverse agentic systems, how should enterprises think about AI architecture? Is there likely to be one dominant framework for building scalable, real-world AI systems, or will organizations need a more flexible approach?

Software can range from five lines of code to massive systems that run for years. Because of that range, there won't be a one-size-fits-all approach to building or governing these systems. Just as we don't use a single framework to manage everything from simple scripts to enterprise platforms, we won't rely on one architecture for agentic AI. Human work itself is incredibly diverse, from basic tasks like spell-checking to analyzing complex financial documents. Since the work varies so much, the AI systems we build will also need to vary.

One principle my teams follow when building agentic AI systems is speed, as continuous improvement is essential. Our typical cycle involves building carefully to avoid major risks, testing with users, gathering feedback, and refining the system until it truly works well. That rapid loop is what helps teams build reliable, high-performing systems faster.

Agentic AI is rapidly increasing systems' ability to reason and act with limited human intervention. Does the rise of agentic architectures meaningfully accelerate the path toward AGI, or are we still far from true general intelligence?

Most of the public thinks of AGI as AI that is as intelligent as people, and one useful definition is AI that can perform any intellectual task a human can. You and I could learn to fly an airplane with maybe 20 hours of training, learn to drive a truck through a forest, or spend a few years writing a PhD thesis. Most humans can do these things. We're still very far from AI meeting that definition of AGI.

For alternative definitions that some businesses have put forward, definitions that dramatically lower the bar, you could argue we have already achieved AGI. There's a good chance that under these lower-bar definitions, some businesses will soon try to declare success. But that won't mean AI has reached human-level intelligence; it will simply mean the definition has been reworked to fit a much lower threshold.

Maybe a year ago, AGI felt 50 years away. Over the past year, perhaps we've made a solid 2% of progress, with another 49 years to go. These numbers are metaphorical, so don't take them too seriously. [Laughs] But we are closer than before, yet many decades away from an AI that matches human intelligence. If you stick with the original definition, aligned with what people genuinely imagine AGI to be, we remain very, very far away.

Is geopolitical fragmentation reshaping global AI strategy for both governments and enterprises?

One of the other big themes I'm seeing is sovereign AI. The world is becoming more fragmented, and there's a lot of discussion about how nation-states want to make sure they have access to AI without needing to rely on other nations or any single company that they may not fully trust or be able to rely on in the long term. Governments and regions are thinking carefully about how to build and maintain their own AI capabilities so they can remain competitive and secure. As AI becomes more central to economic growth and national security, the question of who controls the infrastructure and models becomes much more important. So alongside enterprise adoption, there's also a growing geopolitical dimension to AI deployment.

In 2026, as enterprises search for real economic returns from AI, what leadership decisions and workforce shifts will ultimately determine whether organizations capture meaningful value from agentic systems?

Leadership matters. When I work with CEOs, I see decisive moments when the C-suite must think strategically about what to invest in and then place those bets thoughtfully, guided by a clear understanding of what the technology can and cannot do, not just the surrounding hype. In periods of transformation, leadership decisions determine whether an organization captures real value from AI or merely experiments at the margins.

I often speak with CEOs before they set a major strategic direction. No one knows exactly where AI will be in a few years, so we are operating in a kind of fog of war. But uncertainty does not mean we don't know anything. Teams and partners who understand the technology well can narrow that uncertainty significantly and make far more informed decisions.

At the same time, everyone should learn to code, or at least learn to build software with AI. AI has lowered the barrier to creating custom tools. Today my marketers, recruiters, HR professionals, and financial analysts who use AI to write code are already more productive than those who do not. When I hire, I increasingly prefer people who know how to build with AI assistance. I may have been early on this shift, but I now see more startups and established companies moving in the same direction. Just as it became unthinkable to hire someone who could not search the web or use email, I am already at the point where I hesitate to hire knowledge workers who cannot use AI to build or automate with code.
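The agentic pattern Ng describes, many steps of reasoning and action rather than a single prompt-and-response exchange, can be illustrated with a toy loop. Everything here is invented for illustration: the scripted stand-in "model," the tool names, and the stopping convention are assumptions, and a real system would call an LLM where `stub_model` sits.

```python
# Toy sketch of an agentic loop: the system repeatedly picks an action,
# executes a tool, and feeds the observation back until it decides to stop.
# The "model" is a scripted stub standing in for an LLM; tool names and the
# "finish" convention are invented for this example.

def stub_model(history):
    """Stand-in for an LLM policy: chooses the next action from the transcript."""
    if not any(step[0] == "search" for step in history):
        return ("search", "quarterly revenue")
    if not any(step[0] == "calculate" for step in history):
        return ("calculate", "120 + 80")
    return ("finish", "Revenue total: 200")

TOOLS = {
    "search": lambda q: f"found 2 figures for '{q}': 120, 80",
    "calculate": lambda expr: str(eval(expr)),  # safe only for this fixed toy input
}

def run_agent(model, max_steps=5):
    """Multistep loop: act, observe, append to history, repeat until 'finish'."""
    history = []
    for _ in range(max_steps):
        action, arg = model(history)
        if action == "finish":
            return arg, history
        observation = TOOLS[action](arg)
        history.append((action, arg, observation))
    return None, history

answer, trace = run_agent(stub_model)
print(answer)
print(len(trace))  # number of tool steps taken before finishing
```

The point of the sketch is the loop structure, not the stub: a single-exchange system would return after one call, whereas this one accumulates a transcript of tool actions and observations, which is the "spectrum of autonomy" Ng refers to (more steps, more autonomy).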



2026-02-27 11:00:00| Fast Company

It's sometime in the future, and Elon Musk, Jeff Bezos, and Sam Altman have joined forces on a new venture called Energym. The global chain of gyms is designed to harness the energy of the unemployed as they exercise on machines. The generated electricity feeds the AI servers that put them out of a job. Think Planet Fitness meets the Matrix, but without living in a simulation.

Energym's mission is to feed the AI machines with human sweat, and it's a great business model. By 2030, almost 80% of people have lost their jobs. If you have no money and no purpose, you may as well use all your free time to work out and feed AI server fans with some kilowatts. "It solves our need for energy and your need for purpose," Altman says in a promotional video.

Energym, as you probably already know, is not real. But it very well could be. In this era, where so many brands and startups are constantly trying to flip the most inane ideas into the Next Big Thing to get a $50 billion valuation and an IPO, this absurd premise makes total sense. The mockumentary-style ad for Energym that has been circulating on the internet captures the current AI startup circle jerk better than any I've seen online so far.

https://www.instagram.com/reels/DVLE-QJEf0n

The advertisement was created by Hans Buyse and Jan De Loore. The latter, who wrote the copy for the video as well as edited and produced it, is the cofounder of a one-man AI creative studio in Belgium called Kitchhock. The company has been creating all types of videos since 2011, back when there was no Seedance or Veo. But now, De Loore is using his creative chops and the latest generative video AI tech to make real ads for real companies in Belgium through his AI video studio arm, AiCandy. Energym is just a satirical ad designed to promote his own business and skewer the very people who make the technology that powers his business.

(Incidentally, Energym is also the name of a company that makes a very real $2,800 static bicycle designed for exercise and to produce electricity, but it's not related to AiCandy's fake ad.)

The Energym commercial is obviously tongue in cheek, as are many other videos we have seen in recent months that make fun of our increasing dependency on artificial intelligence and its power. But this one hits particularly hard. For some, it may be the Black Mirror-esque nature of it. (There's an actual episode of the British TV series that feels like an extended version of the ad.) Personally, it connects with the WTF-ness that the current AI situation is provoking in me on different levels. The fear of what's next. The dread of seeing reality destroyed. The disgust for the fat cats who are running this charade with no checks and nobody's permission. I find it hard to pinpoint what it is. It's just an absurd exaggeration with no logical basis that hits too close for comfort and, at the same time, makes me happy.


