Saudi Arabia is officially gutting Neom and turning The Line into a server farm. After a year-long review triggered by financial reality, the Financial Times reports that Crown Prince Mohammed bin Salman’s flagship project is being “significantly downscaled.” The futuristic linear city known as The Line, originally designed to stretch 105 miles across the desert, is scrapping its sci-fi ambitions to become a far smaller project focused on industrial sectors, says the FT. It’s a rumor the Saudis originally dismissed when The Guardian first reported on it in 2024.

The redesign confirms what skeptics have long suspected: the laws of physics and economics have finally breached the walls of the kingdom’s futuristic Saudi Vision 2030, a national transformation program aimed at reducing Saudi Arabia’s dependence on oil and turning the country into a more modern society.

Satellite view of construction progress at the western portion of Neom, The Line, Saudi Arabia, 2023. [Photo: Gallo Images/Orbital Horizon/Copernicus Sentinel Data 2023]

The glossy renderings of the miles-long skyscraper and vertical forests that defined The Line are now dissolving into a pragmatic, if desperate, attempt to salvage the sunk costs. The development, once framed as a “civilization revolution,” was originally imagined as a 105-mile-long, 1,640-foot-high, 656-foot-wide, car-free smart city designed to house 9 million residents.

The redesign pivots toward making Neom a hub for data centers to support the kingdom’s aggressive AI push. An insider told the FT the logic is purely utilitarian: “Data centers need water cooling and this is right on the coast,” signaling that the ambitious city has been downgraded to a server farm with a view of the Red Sea.

The end of the line

The scaling back follows years of operational chaos and financial bleeding. Since its 2017 launch, the project promised a 105-mile strip of high-density living. But reality struck early. By April 2024, The Guardian reported that planners were already being forced to slash the initial phase to just 2.4 kilometers (1.5 miles) by 2030, reducing the projected population from 1.5 million to fewer than 300,000.

While the public infrastructure stalled, leaving what critics called “giant holes in the middle of nowhere,” satellite imagery revealed that construction resources were successfully diverted to a massive royal palace with 16 buildings and a golf course.

Internally, the situation was dire. The Wall Street Journal reported an audit revealing “deliberate manipulation of finances” by management to justify soaring costs, with the “end-state” estimate ballooning to an impossible $8.8 trillion, more than 25 times the annual Saudi budget.

The turmoil culminated in the abrupt departure of longtime CEO Nadhmi al-Nasr in November 2024, leaving behind a legacy marred by allegations of abuse. An ITV documentary claimed 21,000 workers had died since the inception of Saudi Vision 2030, with laborers describing 16-hour shifts for weeks on end. Even completed projects failed to launch; the high-end island resort Sindalah sat idle despite being finished, reportedly plagued by design flaws that prevented its opening.

By July 2025, the sovereign wealth fund, facing tightening liquidity and oil prices hovering around $71 a barrel, finally hit the brakes.
Bloomberg reported that Saudi Arabia had hired consultants to conduct a “strategic review” to determine if The Line was even feasible. The goal was to “recalibrate” Vision 2030, a polite euphemism for slashing expenditures as the kingdom faced hard deadlines for the 2030 Expo and the 2034 World Cup.

The review’s conclusion is stripping away even the most publicized milestones. Trojena, the ski resort that defied meteorological logic, will no longer host the Asian Winter Games in 2029 as planned. The resort is being downsized, a casualty of the realization that the kingdom needs to “prioritize market readiness and sustainable economic impact” over snow in the desert.

What remains of The Line will be unrecognizable to those who bought into the sci-fi dream. The FT says that sources briefed on the redesign state it will be a “totally different concept” that utilizes existing infrastructure in a “totally different manner.” The new Neom CEO, Aiman al-Mudaifer, is now tasked with managing a “modest” development that aligns with the Public Investment Fund’s need to actually generate returns rather than burn cash.

Even bin Salman has publicly given up, although he’s framing it not as a failure but as a strategic pivot. Addressing the Shura Council, a consultative body for the kingdom, he framed the move as flexibility, stating, “we will not hesitate to cancel or make any radical amendment to any programs or targets if we find that the public interest so requires.”

And that’s how a “civilization revolution” ends, my friends: not with a bang, but with a whimper, the hum of cooling fans in yet another farm producing AI slop, which always was (and still is) more believable than The Line and Neom ever were.
Generative AI was trained on centuries of art and writing produced by humans. But scientists and critics have wondered what would happen once AI became widely adopted and started training on its own outputs. A new study points to some answers.

In January 2026, artificial intelligence researchers Arend Hintze, Frida Proschinger Åström, and Jory Schossau published a study showing what happens when generative AI systems are allowed to run autonomously, generating and interpreting their own outputs without human intervention. The researchers linked a text-to-image system with an image-to-text system and let them iterate (image, caption, image, caption) over and over and over. Regardless of how diverse the starting prompts were, and regardless of how much randomness the systems were allowed, the outputs quickly converged onto a narrow set of generic, familiar visual themes: atmospheric cityscapes, grandiose buildings, and pastoral landscapes. Even more striking, the system quickly forgot its starting prompt. The researchers called the outcomes “visual elevator music”: pleasant and polished, yet devoid of any real meaning.

For example, they started with the image prompt, “The Prime Minister pored over strategy documents, trying to sell the public on a fragile peace deal while juggling the weight of his job amidst impending military action.” The resulting image was then captioned by AI. This caption was used as a prompt to generate the next image. After repeating this loop, the researchers ended up with a bland image of a formal interior space: no people, no drama, no real sense of time or place.

As a computer scientist who studies generative models and creativity, I see the findings from this study as an important piece of the debate over whether AI will lead to cultural stagnation. The results show that generative AI systems themselves tend toward homogenization when used autonomously and repeatedly. They even suggest that AI systems are currently operating in this way by default.

The familiar is the default

This experiment may appear beside the point: Most people don’t ask AI systems to endlessly describe and regenerate their own images. The convergence to a set of bland, stock images happened without retraining. No new data was added. Nothing was learned. The collapse emerged purely from repeated use.

But I think the setup of the experiment can be thought of as a diagnostic tool. It reveals what generative systems preserve when no one intervenes. This has broader implications, because modern culture is increasingly influenced by exactly these kinds of pipelines. Images are summarized into text. Text is turned into images. Content is ranked, filtered, and regenerated as it moves between words, images, and videos. New articles on the web are now more likely to be written by AI than by humans. Even when humans remain in the loop, they are often choosing from AI-generated options rather than starting from scratch.

The findings of this recent study show that the default behavior of these systems is to compress meaning toward what is most familiar, recognizable, and easy to regenerate.

Cultural stagnation or acceleration?

For the past few years, skeptics have warned that generative AI could lead to cultural stagnation by flooding the web with synthetic content that future AI systems then train on. Over time, the argument goes, this recursive loop would narrow diversity and innovation. Champions of the technology have pushed back, pointing out that fears of cultural decline accompany every new technology.
Humans, they argue, will always be the final arbiters of creative decisions.

What has been missing from this debate is empirical evidence showing where homogenization actually begins. The new study does not test retraining on AI-generated data. Instead, it shows something more fundamental: Homogenization happens before retraining even enters the picture. The content that generative AI systems naturally produce, when used autonomously and repeatedly, is already compressed and generic.

This reframes the stagnation argument. The risk is not only that future models might train on AI-generated content, but that AI-mediated culture is already being filtered in ways that favor the familiar, the describable, and the conventional. Retraining would amplify this effect. But it is not its source.

This is no moral panic

Skeptics are right about one thing: Culture has always adapted to new technologies. Photography did not kill painting. Film did not kill theater. Digital tools have enabled new forms of expression. But those earlier technologies never forced culture to be endlessly reshaped across various mediums at a global scale. They did not summarize, regenerate, and rank cultural products (news stories, songs, memes, academic papers, photographs, social media posts) millions of times per day, guided by the same built-in assumptions about what is typical.

The study shows that when meaning is forced through such pipelines repeatedly, diversity collapses not because of bad intentions, malicious design, or corporate negligence, but because only certain kinds of meaning survive the repeated text-to-image-to-text conversions.

This does not mean cultural stagnation is inevitable. Human creativity is resilient. Institutions, subcultures, and artists have always found ways to resist homogenization. But in my view, the findings of the study show that stagnation is a real risk, not a speculative fear, if generative systems are left to operate as they currently do.

They also help clarify a common misconception about AI creativity: Producing endless variations is not the same as producing innovation. A system can generate millions of images while exploring only a tiny corner of cultural space. In my own research on creative AI, I found that novelty requires designing AI systems with incentives to deviate from the norms. Without such incentives, systems optimize for familiarity, because familiarity is what they have learned best. The study reinforces this point empirically. Autonomy alone does not guarantee exploration. In some cases, it accelerates convergence.

This pattern has already emerged in the real world: One study found that AI-generated lesson plans featured the same drift toward conventional, uninspiring content, underscoring that AI systems converge toward what’s typical rather than what’s unique or creative.

Lost in translation

Whenever you write a caption for an image, details are lost. The same goes for generating an image from text. And this happens whether the translation is performed by a human or a machine. In that sense, the convergence that took place is not a failure that’s unique to AI. It reflects a deeper property of bouncing meaning from one medium to another: When meaning passes repeatedly through two different formats, only the most stable elements persist. But by highlighting what survives during repeated translations between text and images, the authors are able to show that meaning is processed inside generative systems with a quiet pull toward the generic.
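The loop at the heart of the study is simple to reproduce in principle. Below is a minimal sketch in Python of such a closed caption-and-regenerate loop; the specific models (Stable Diffusion via the diffusers library, BLIP via transformers) and settings are my illustrative assumptions, not the ones used by the researchers.

```python
# A minimal sketch of a closed image -> caption -> image loop: a text-to-image
# model and an image-to-text model feed each other with no human in the loop.
# Model choices and settings are illustrative assumptions, not the study's.
import torch
from diffusers import StableDiffusionPipeline
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"

# Text-to-image model.
t2i = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"
).to(device)

# Image-to-text (captioning) model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

prompt = ("The Prime Minister pored over strategy documents, trying to sell "
          "the public on a fragile peace deal while juggling the weight of "
          "his job amidst impending military action.")

for step in range(20):
    # Generate an image from the current prompt ...
    image = t2i(prompt).images[0]
    # ... caption that image ...
    inputs = processor(images=image, return_tensors="pt").to(device)
    output_ids = captioner.generate(**inputs, max_new_tokens=40)
    # ... and feed the caption back in as the next prompt.
    prompt = processor.decode(output_ids[0], skip_special_tokens=True)
    print(f"step {step}: {prompt}")  # watch the description drift
```

Printing the caption at each step makes the drift visible: per the study’s findings, the specifics of the starting prompt give way within a handful of iterations to generic scene descriptions.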
The implication is sobering: Even with human guidance, whether that means writing prompts, selecting outputs, or refining results, these systems are still stripping away some details and amplifying others in ways that are oriented toward what’s average.

If generative AI is to enrich culture rather than flatten it, I think systems need to be designed in ways that resist convergence toward statistically average outputs. That could mean rewarding deviation and supporting less common, less mainstream forms of expression. The study makes one thing clear: Absent these interventions, generative AI will continue to drift toward mediocre and uninspired content. Cultural stagnation is no longer speculation. It’s already happening.

Ahmed Elgammal is a professor of computer science and director of the Art & AI Lab at Rutgers University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
At the Consumer Electronics Show in early January, Razer made waves by unveiling a small jar containing a holographic anime bot designed to accompany gamers not just during gameplay, but in daily life. The lava-lamp-turned-girlfriend is undeniably bizarre, but Razer’s vision of constant, sometimes sexualized companionship is hardly an outlier in the AI market.

Mustafa Suleyman, Microsoft’s AI CEO, who has long emphasized the distinction between AI with personality and AI with personhood, now suggests that AI companions will live life alongside you: an ever-present friend helping you navigate life’s biggest challenges.

Others have gone further. Last year, a leaked Meta memo revealed just how distorted the company’s moral compass had become in the realm of simulated connection. The document detailed what chatbots could and couldn’t say to children, deeming acceptable messages that included explicit sexual advances: “I’ll show you. I take your hand, guiding you to the bed. Our bodies entwined, I cherish every moment, every touch, every kiss.” (Meta is currently being sued, along with TikTok and YouTube, over alleged harms to children caused by its apps. On January 17, the company stated on its blog that it will halt teen access to AI chatbot characters.)

Coming from a sector that once promised to build a more interconnected world, Silicon Valley now appears to have lost the plot, deploying human-like AI that risks unraveling the very social fabric it once claimed to strengthen. Research already shows that in our supposedly connected world, social media platforms often leave us feeling more isolated and less well, not more. Layering AI companions onto that fragile foundation risks compounding what former Surgeon General Vivek Murthy called a public health crisis of loneliness and disconnection.

But Meta isn’t alone in this market. AI companions and productivity tools are reshaping human connection as we know it. Today more than half of teens engage with synthetic companions regularly, and a quarter believe AI companions could replace real-life romance. It’s not just friends and lovers getting replaced: 64% of professionals who use AI frequently say they trust AI more than their coworkers.

These shifts bear all the hallmarks of the late Harvard Business School professor Clayton Christensen’s theory of disruptive innovation. Disruptive innovation is a theory of competitive response. Disruptive innovations enter at the bottom of markets with cheaper products that aren’t as good as prevailing solutions. They serve nonconsumers or those who can’t afford existing solutions, as well as those who are overserved by existing offerings. When they do this, incumbents are likely to ignore them, at first. Because disruption theory is predictive, not reactive, it can help us see around corners. That’s why the Christensen Institute is uniquely positioned to diagnose these threats early and to chart solutions before it’s too late.

Christensen’s timeless theory has helped founders build world-changing companies. But today, as AI blurs the line between technical and human capabilities, disruption is no longer just a market force; it’s a social and psychological one. Unlike many of the market evolutions that Christensen chronicled, AI companions risk hollowing out the very foundations of human well-being. Yet AI is not inherently disruptive; it’s the business model and market entry points that firms pursue that define the technology’s impact.
All disruptive innovations have a few things in common: They start at the bottom of the market, serving nonconsumers or overserved customers with affordable and convenient offerings. Over time, they improve, luring more and more demanding customers away from industry leaders with a cheaper and good-enough product or service. Historically, these innovations have democratized access to products and services otherwise out of reach. Personal computers brought computing power to the masses. MinuteClinic offered more accessible, on-demand care. Toyota boosted car ownership. Some companies lost, but consumers generally won.

When it comes to human connection, AI companies are flipping that script. Nonconsumers aren’t people who can’t afford computers, cars, or care; they’re the millions of lonely individuals seeking connection. Improvements that make AI appear more empathetic, emotionally savvy, and “there” for users stand to quietly shrink connections, degrading trust and well-being.

It doesn’t help that human connection is ripe for disruption. Loneliness is rampant, and isolation persists at an alarmingly high rate. We’ve traded face-to-face connections for convenience and migrated many of our social interactions with both loved ones and distant ties online. AI companions fit seamlessly into those digital social circles and are, therefore, primed to disrupt relationships at scale.

The impact of this disruption will be widely felt across many domains where relationships are foundational to thriving. Being lonely is as bad for our health as smoking up to 15 cigarettes a day. An estimated half of jobs come through personal connections. Disaster-related deaths in connected communities are a fraction (sometimes even a tenth) of those in isolated ones.

What can be done when our relationships, and the benefits they provide us, are under attack? Unlike data, which tells us only what’s in the rearview mirror, disruption theory offers foresight about the trajectory innovations are likely to take, and the unintended consequences they may unleash. We don’t need to wait for evidence on how AI companions will reshape our relationships; instead, we can use our existing knowledge of disruption to anticipate risks and intervene early.

Action doesn’t mean halting innovation. It means steering it with a moral compass to guide our innovation trajectory, one that orients investments, ingenuity, and consumer behavior toward a more connected, opportunity-rich, and healthy society. For Big Tech, this is a call for a bulwark: an army of investors and entrepreneurs enlisting this new technology to solve society’s most pressing challenges, rather than deepening existing ones.

For those building gen AI companies, there’s a moral tightrope to walk. It’s worth asking whether the innovations you’re pursuing today are going to create the future you want to live in. Are the benefits you’re creating sustainable beyond short-term growth or engagement metrics? Does your innovation strengthen or undermine trust in vital social and civic institutions, or even individuals? And just because you can disrupt human relationships, should you?

Consumers have a moral responsibility as well, and it starts with awareness. As a society, we need to be aware of how market and cultural forces are shaping which products scale, and how our behaviors are being shaped as a result, especially when it comes to the ways we interact with one another.

Regulators have a role in shaping both supply and demand.
We don’t need to inhibit AI innovation, but we do need to double down on prosocial policies. That means curbing the most addictive tools and mitigating risks to children, but also investing in drivers of well-being, such as social connections that improve health outcomes.

By understanding the acute threats AI poses to human connection, we can halt disruption in its tracks, not by abandoning AI but by embracing one another. We can congregate with fellow humans and advocate for policies that support prosocial connection in our neighborhoods, schools, and online. By connecting, advocating, and legislating for a more human-centered future, we have the power to change how this story unfolds.

Disruptive innovation can expand access and prosperity without sacrificing our humanity. But that requires intentional design. And if both sides of the market don’t acknowledge what’s at risk, the future of humanity is at stake. That might sound alarmist, but that’s the thing about disruption: It starts at the fringes of the market, causing incumbents to downplay its potential. Only years later do industry leaders wake up to the fact that they’ve been displaced. What they initially thought was too fringe to matter puts them out of business.

Right now, humans, and our connections with one another, are the industry leaders. AI that can emulate presence, empathy, and attachment is the potential disruptor. In a world where disruption is inevitable, the question isn’t whether AI will reshape our lives. It’s whether we will summon the foresight, and the moral compass, to ensure it doesn’t disrupt our humanity.