2025-12-04 10:00:00 | Fast Company

Up in the Cascade Mountains, 90 miles east of Seattle, a group of high-ranking Amazon engineers gathers for a private off-site. They hail from the company's North America Stores division, and they're here at this Hyatt resort on a crisp September morning to brainstorm new ways to power Amazon's retail experiences. Passing the hotel lobby's IMAX-like mountain views, they filter into windowless meeting rooms. Down the hall, the off-site's keynote speaker, Byron Cook, vice president and distinguished scientist at Amazon, slips into an empty conference room to have some breakfast before his presentation.

Cook is 6-foot-6, but with sloping shoulders that make his otherwise imposing frame appear disarmingly concave. He's wearing a rumpled version of his typical uniform: a thick black hoodie and loose black pants hanging slightly high at the ankles. An ashy thatch of hair points in whatever direction his hands happen to push it. Cook, 54, doesn't look much like a scientist, distinguished or otherwise, and certainly not like a VP; more like a nerdy roadie. "They don't know who I am yet," he tells me between bites of breakfast, referring to the two dozen or so engineers now taking their seats.

Despite his exalted title, Cook has faced plenty of rooms like this in his self-made role as a kind of missionary within Amazon, spreading the word about a powerful but obscure type of artificial intelligence called automated reasoning. As he's done many times before, Cook is here to get the highly technical people in that room to become believers. He's championing an approach to AI that isn't powered by gigawatt data centers stuffed with GPUs, but by principles old enough to be written on papyrus, and one that's already positioning Amazon as a leader in the tech industry's quest to solve the problem of hallucinations.

Cook doesn't have a pretalk ritual, no need to get in character. He's riffing half-seriously to a colleague about the pleasures of riding the New York subway in the summertime when someone mentions that the session is about to begin. He immediately drops his fork and strides out. His next batch of converts awaits.

When ChatGPT hit the world with asteroid force in November 2022, Amazon was caught flat-footed just like everyone else. Not because it was an AI laggard: The tech giant had recently overhauled nearly all of its divisions, including its massive cloud-computing arm, AWS, to leverage deep learning. Amazon also dominated the smart-home market, with 300 million devices connected to Alexa, its AI-powered assistant. It had even been researching and building large language models, the tech behind ChatGPT, for multiple years, as CEO Andy Jassy told CNBC in April 2023.

But OpenAI's chatbot changed the definition, and expectations, of AI overnight. Before, AI was still a mostly invisible ingredient in voice assistants, facial recognition, and other relatively narrow applications. Now it was suddenly seen as a prompt-powered genie, an infinitely flexible do-anything machine that every tech company needed to embrace or risk irrelevance. Less than six months after ChatGPT's debut, Amazon launched Bedrock, its own AWS-hosted generative AI service for enterprise clients, a list that currently includes 3M, DoorDash, Thomson Reuters, United Airlines, and the New York Stock Exchange, among others.
Over the next two years, Amazon injected generative AI into product after product, from Prime Video and Amazon Music (where it powers content recommendation and discovery tools) to online retail pages (where sellers can use it to optimize their product listings), and even into internal tools used by AWS's sales teams. The company has released two chatbots (a shopping assistant called Rufus and the business-friendly Amazon Q), plus its own set of foundation models called Nova; these are general-purpose AI systems, akin to Google's Gemini or OpenAI's line of GPTs. Amazon even caught the industry fever around so-called AGI (artificial general intelligence, a yet-to-be-achieved version of AI that can do any cognitive task a human can) and in late 2024 launched AGI Lab, a flashy internal incubator led by David Luan, an ex-OpenAI researcher.

Still, none of it captured the public's imagination like the stream of shiny objects emitted by OpenAI (reasoning models!), Anthropic (chatbots that code!), and Google (AI Overviews! Deep Research!). Like Apple, Amazon was unable to turn its early lead in AI assistants into an advantage in this new era. Alexa and Siri simply cannot compete.

But maybe that has been for the best, because 2025 was the year that AI's sheen suddenly started to come off: GPT-5 fell flat, vibe coding went from killer app to major risk, and an MIT study rattled the industry by claiming that 95% of businesses get no meaningful return on their AI pilot projects. It was against this backdrop, "the summer AI turned ugly," as Deutsche Bank analysts called it, that Amazon publicly released Automated Reasoning Checks, a feature promising to minimize AI hallucinations and deliver up to 99% verification accuracy for generative AI applications built on AWS.

The product was Cook's brainchild; in a nutshell, it snuffs out hallucinations using the same kind of computerized logic that lets mathematicians prove 300-page-long theorems. (In fact, a 1956 automated reasoning program called Logic Theorist is considered by some experts to be the world's first AI system; it found new and shorter versions of some of the proofs in Principia Mathematica, one of the most fundamental texts in modern mathematics.) Sexy, it ain't. Still, Swami Sivasubramanian, one of Amazon's highest-ranking AI executives, who serves on Jassy's "S-team" of direct advisers, was impressed enough to call Automated Reasoning Checks "a new milestone in AI safety" in a LinkedIn post. Matt Garman, CEO of AWS, referred to it as "game-changing."

Automated reasoning's promise of quashing AI misbehavior with math has quietly become an essential part of Amazon's strategy around agents, those LLM-powered workbots that are supposed to transform enterprise productivity [checks watch] any day now. Apparently, businesses have serious side-eye about that, too: Earlier this year, Gartner predicted that more than 40% of agentic AI projects will be ditched within the next two years due to inadequate risk controls. The company told me recently that it predicts that 30% to 60% of the projects that do go forward will fail due to hallucinations, risk, and lack of governance. That's not a prophecy Amazon can afford to let come true, not with a potential market for AI agents that Gartner estimates to be worth $512 billion by 2029. One way or another, hallucinations have got to go. The question is how.
Agents are just souped-up LLMs, which means they can and will go off the rails; in fact, as OpenAI itself recently admitted following an internal study, they can't not. What Cook helped Amazon realize, just months after ChatGPT's release, was that it already had a secret weapon for extinguishing hallucinations, hidden in plain sight.

Automated reasoning is the polar opposite of generative AI: old, stiff, and hard to use. Many at Amazon had never heard of it. But Cook knew how to wield it, having brought it to Amazon nearly 10 years ago as a way of rooting out hidden security vulnerabilities within AWS. And he'd been amassing what he estimates to be the largest group of automated reasoning experts in the tech industry. Now that investment is set to pay off in a way that Amazon never expected.

Automated Reasoning Checks is just the first of many products that the company plans to release (on a timetable it won't specify) that fuse the flexibility of language models with the proven reliability of automated reasoning. The latest, called Policy in Amazon Bedrock AgentCore and previewed this week at AWS's annual re:Invent conference, uses automated reasoning to stop agents from taking actions they're not allowed to (such as issuing customer refunds based on fraudulent requests). If this combined approach, known as neuro-symbolic AI, can reduce the potential failure rate of agentic AI projects by even a fraction of a percent, it would be worth hundreds of millions of dollars, say analysts at Gartner. And Amazon knows it. "To realize the transformative potential of AI agents and truly change the way we live and work, we need that trust," Sivasubramanian says. "We believe the foundation for trustworthy, production-ready AI agents lies in automated reasoning."

To understand why Amazon is banking on automated reasoning, it's worth sketching out how it differs from the kind of AI you've already heard of. Unlike neural networks, which learn patterns by ingesting millions or even billions of examples, automated reasoning relies on a special language called formal logic to express problems as a kind of arithmetic, based on principles that date back to ancient Greece. Computers can use this rule-based approach to calculate the answers to yes-or-no questions with mathematical certainty, not the probabilistic best guesses that deep learning produces. Think of automated reasoning like TurboTax for solving complex logical problems: As long as the problems are expressed in a special language, computers can do most of the work, and they have been doing so for decades.

Since 1994, when a flaw in Intel's Pentium chips cost the company half a billion dollars to fix, nearly all microchip manufacturers have used automated reasoning to prove the correctness of designs in advance. The French government used it to verify the software for Paris's first self-driving Métro train in 1998. In 2004, NASA even used it to control the Spirit and Opportunity rovers on Mars.

There's a catch, of course: Because automated reasoning can only reduce problems to three possible outcomes (yes, no, or the equivalent of "does not compute"), finding ways to apply this logically bulletproof but incredibly rigid style of AI to the real world can be difficult and expensive. But when automated reasoning works, it really works, collapsing vast, even unknowable possibilities into a single mathematical guarantee that can be computed in milliseconds on an average CPU. And Cook is very, very good at getting automated reasoning to work.
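To make that yes-or-no flavor concrete, here is a minimal sketch using Z3, an open-source solver that is a staple of the automated reasoning field (Amazon doesn't disclose which engines power its products). The access policy and variable names below are invented for illustration:

```python
# A minimal sketch of automated reasoning with the open-source Z3 solver
# (pip install z3-solver). The access "policy" is invented for illustration.
from z3 import Bools, Solver, Implies, And, Not, unsat

is_admin, is_authenticated, can_delete = Bools("is_admin is_authenticated can_delete")

# The policy, written in formal logic: deleting requires an authenticated admin.
policy = Implies(can_delete, And(is_admin, is_authenticated))

# The yes-or-no question: could an unauthenticated user ever delete?
solver = Solver()
solver.add(policy, can_delete, Not(is_authenticated))

if solver.check() == unsat:
    # "Unsatisfiable" means no combination of inputs can make this happen:
    # a mathematical guarantee, not a probabilistic best guess.
    print("Provably impossible under the policy.")
else:
    print("Counterexample:", solver.model())
```

The solver either proves the bad case impossible in every scenario or hands back a concrete counterexample, which is exactly the rigid-but-bulletproof behavior described above.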
Cook began his career building a formidable scientific reputation at Microsoft Research, where he spent a decade applying automated reasoning to everything from systems biology to the famously unsolvable halting problem in computer science. (Want a foolproof way to tell in advance if any computer program will run normally or get stuck in an infinite loop? Sorry, not possible. That's the halting problem.) But by 2014, he was looking to put his findings, many of which have been published as peer-reviewed research, to work outside the lab. "I was figuring out: Where is the biggest blast radius? Where's the place I could go to foment a revolution?" he says. "I watched everyone moving to the cloud, and was like, I think AWS is the place to go."

The first problem Amazon aimed Cook at was cloud security. Reporting directly to then chief information security officer Stephen Schmidt, Cook and his newly formed Automated Reasoning Group (ARG) painstakingly translated AWS security protocols into the language of mathematical proofs and then used their logic-based tools to surface hidden flaws. Once those flaws were corrected, those same tools could then prove with certainty that the system was secure.

Some at AWS were dubious at first. "When you look 'mad scientist' up in the dictionary, Byron's picture is in the margin," says Eric Brandwine, an Amazon distinguished engineer who at the time worked on security for AWS. "Early on, I challenged [him] on a lot of this stuff." But as Cook's group fleshed out plans and racked up small but significant wins, like catching a vulnerability in AWS's Key Management Service, the cryptographic holy of holies that controls how clients safeguard their data, skeptics started becoming evangelists. "Some of these [were] beautiful bugs: They'd been there for years and never been found by our best experts, and never been found by bad guys," says James Hamilton, a legendary distinguished engineer within Amazon who now directly advises Andy Jassy. "And yet, automated reasoning found them."

From 2018 onward, Amazon's automated reasoning experts worked with engineers to encode the technology into nearly every part of AWS, from analytics and storage to developer tools and content delivery. One particular niche of cloud-computing clients, heavily regulated financial services firms like Goldman Sachs and the global hedge fund Bridgewater Associates, with sensitive data and strict compliance requirements, found automated reasoning's promise of provable security extremely compelling. When ChatGPT appeared and the world flung itself headfirst into generative AI, these companies did too. But they still wanted to keep "the one small thing," Cook says, that they'd become accustomed to along the way: trust.

That customer feedback spurred Cook to imagine how LLMs and automated reasoning might fit together. The solution that he and his collaborators prototyped in the summer of 2023 works by leveraging the same logical framework that worked so well for squishing security bugs in AWS. Step one: Take any policy meant to inform a chatbot (say, a stack of HR documentation, or zoning regulations) and translate it into formal logic, the special language of automated reasoning. Step two: Translate any responses generated by the bot, too. Step three: Calculate. If there's a discrepancy between what the LLM wants to say and what the policy allows, the automated reasoning engine will catch it, flag it, and tell the bot to try again. (For humans in the loop, it'll also provide logical proof of what went wrong and how, and suggest specific fixes if needed.)
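Here is a toy sketch of those three steps, assuming the hard part (translating the policy and the response into formal logic) has already happened; the HR rule, numbers, and variable names are all hypothetical:

```python
# Toy sketch of the three-step check, assuming the policy and the chatbot's
# draft answer are already translated into formal logic. The HR rule and
# numbers are hypothetical. Requires: pip install z3-solver
from z3 import Int, Solver, Implies, And, unsat

tenure_years, pto_days = Int("tenure_years"), Int("pto_days")

# Step one: the policy in formal logic --
# employees with under 2 years of tenure get at most 15 PTO days.
policy = Implies(tenure_years < 2, pto_days <= 15)

# Step two: the bot's draft claim, also translated --
# it wants to tell a 1-year employee they have 20 days.
claim = And(tenure_years == 1, pto_days == 20)

# Step three: calculate. If policy and claim cannot both hold,
# the response is flagged and the bot is told to try again.
solver = Solver()
solver.add(policy, claim)
print("hallucination: flag and regenerate" if solver.check() == unsat
      else "consistent with policy")
```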
"We showed that to senior leadership, and they went nuts for it," says Nadia Labai, a senior applied scientist at AWS who partnered with Cook on the project. The demo went on to become Automated Reasoning Checks, which Amazon previewed at its annual re:Invent conference in December 2024.

PwC, one of the Big Four global accounting and consulting firms, was among the first AWS clients to adopt it. "We do a lot of work in pharmaceutical, energy, and utilities, all of which are regulated," says Matt Wood, PwC's global and U.S. commercial technology and innovation officer. PwC relies on solutions like AWS's automated reasoning tool to check the accuracy of the outputs of its generative AI tools, including agents. But Wood sees the technology's appeal spreading beyond finance and other regulation-heavy industries. "Look at what it took to set up a website 25 years ago; that was a refined set of skills. Today, you go on Squarespace, click a button, and it's done," he says. "My expectation is that automated reasoning will follow a similar path. Amazon will make this easier and easier: If you want an automated reasoning check on something, you'll have one."

Amazon has already embarked on this path with its own enterprise products and internal systems. Rufus, the AI shopping assistant, uses automated reasoning to keep its responses relevant and accurate. Warehouse robots use it to coordinate their actions in close quarters. Nova, Amazon's fleet of generative AI foundation models, uses it to improve so-called chain-of-thought capabilities.

And then there are the agents. Cook says the company has multiple agentic AI projects in development that incorporate automated reasoning, with intended applications in software development, security, and policy enforcement in AWS. One is Policy in AgentCore, which Amazon released after this story was reported. Another that's peeking out from behind the curtain is Auto, an agent built into Kiro, Amazon's new AI programming tool, that will use formal logic to help make sure bot-written code matches humans' intended specifications. But Sivasubramanian, AWS's vice president for agentic AI (and Cook's boss), isn't coy about the commitment Amazon is making. "We believe agentic AI has the potential to be our next multibillion-dollar business," he says. "As agents are granted more and more autonomy . . . automated reasoning will be key in helping them reach widespread enterprise adoption."

Agents are part of why Cook is touting automated reasoning to his engineer colleagues from the North America Stores division at their off-site in the mountains. Retail might not seem to have much in common with finance or pharma, but it's a domain that's full of decisions with real stakes. (While onstage at re:Invent 2025, Cook said that "giving an agent access to your credit card is like giving a teenager access to your credit card . . . You might end up owning a pony or a warehouse full of candy.") And in that environment, relying on autonomous bots, empowered to do anything from execute transactions to rewrite software, can turn hallucination from tolerable quirk into Russian roulette. It's a matter of scale: When one vibe-coding VC unleashes an agent that accidentally nukes his own app's database, as happened earlier this year to SaaS investor Jason Lemkin, it's a funny story. (He got the data back.)
But if Fortune 500 companies start deploying swarms of agents that accidentally mislead customers, destroy records, or break industry regulations, there's no Undo button. Enterprise software is full of these potential pitfalls, and existing methods for reducing hallucination aren't always strong enough to keep agents from blundering into them. That's because agents shift the definition of hallucination itself, from errors in word to errors in deed. "First of all, this thing could lie to me," explains Cook. "But secondly, if I let it launch rockets," his metaphor for irreversible actions, "will it launch rockets when we're not supposed to?"

Back in his hotel room after the keynote, Cook is reviewing the contents of a confidential slide deck about how automated reasoning can solve this rocket-launching problem. The demo, which he hurriedly mentioned in his talk (he ran out of time before being able to show it), describes a system that can transform safety policies for an agent, dos and don'ts written in natural language, into a flowchart-like visualization of how the agent can and cannot behave, all backed by mathematical proof. There's even an "Attempt to Fix" button to use if the system detects an anomaly. Cook calls the demo a concept car, but some of its ideas made it into Policy in AgentCore, which is already available in preview to some AWS customers.

PwC, for one, sees Amazon's logic-backed take on AI extending into coordinating the agents themselves. "If you've got agents building other agents, collaborating with other agents, managing other agents, agents all the way down," says Wood, "then having a way of forcing consistency [on their behavior] is going to be really, really important, which is where I think automated reasoning will play a role."

The ability to reliably orchestrate the actions of AI, not just single agents but entangled legions of them, at scale, is a target that Amazon has squarely in its sights. But automated reasoning may not be the only way to get the job done. EY, another Big Four firm, recently launched its own neuro-symbolic solution to AI hallucinations, EY Growth Platforms, which fuses deep learning with proprietary knowledge graphs. A startup called Kognitos offers business-friendly agents backed by a deterministic symbolic program, dubbed "English as Code." Others, like PromptQL, forgo neuro-symbolic methods altogether, preferring the simulated reasoning of frontier LLMs. But even they still attack the agent hallucination problem much like Amazon does: by using generative AI to translate business processes into a special internal language that's easy to audit and control.

That translation process is where Amazon built a 10-year lead with automated reasoning. Now it has to maintain it. Nadia Labai is currently working on ways to improve Amazon's techniques for using LLMs to convert natural language into formal logic. It's part of a strategy that could help turn Amazon's brand of customer-driven, business-friendly AI into a new class of industry-defining infrastructure.

A few days before the off-site, I met with Cook in a conference room at Amazon's Seattle headquarters. Sitting with his legs tucked catlike beneath him, Cook mused about his own vision for the future of automated reasoning, one that extends far beyond Amazon's ambitions for enterprise-grade AI. The world, he says, is filled with socio-technical systems: patchworks of often-abstruse rules that only highly paid experts can easily navigate, from civil statutes to insurance policies.
"Right now, rich people get [to take advantage of] that stuff," he continues. But if the rest of us had a way to manipulate these systems in natural language (thanks, LLMs) with an underlying proof of correctness (thanks, automated reasoning), a workaday kind of superintelligence could be unlocked. Not the kind that helps us colonize the galaxy, as Google DeepMind CEO Demis Hassabis envisions, but one that simply helps people navigate the complexity of everyday life, like figuring out where it's legal to build housing for an aging relative or how to get an insurance company to cover their expensive medication.

"You could have an app that, in an hour of your own time, would get answers to questions that before would take you months," Cook says. "That democratizes, if you will, access to truth. And that's the start of a new era."

This story is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.


2025-12-04 09:30:00 | Fast Company

Amid an uncertain economy (the growth of AI, tariffs, rising costs), companies are pulling back on hiring. As layoffs increase, the labor market cools, and unemployment ticks up, we're seeing fewer people quitting their jobs. The implication: Many workers will be "job hugging" and sitting tight in their roles through 2026. Put more pessimistically: Employees are going to feel stuck where they are for the foreseeable future. In many cases, that means staying in unsatisfying jobs.

Gallup's 2025 State of the Global Workforce report shows that employee engagement has fallen to 21%. And a March 2025 study of 1,000 U.S. workers by advisory and consulting firm Fractional Insights showed that 44% of employees reported feeling workplace angst, despite often showing intent to stay. So if these employees are hugging their current roles, it's not an act of affection. It's often in desperation.

"Being a job hugger means you're feeling anxious, insecure, more likely to stay but also more likely to want to leave," says Erin Eatough, chief science officer and principal adviser at Fractional Insights, which applies organizational psychology insights to the workplace. "You often see a self-protective response: Nothing to see here, I'm doing a good job, I'm not leaving."

This performative behavior can be psychologically damaging, especially in a culture of layoffs. "If I was scared of losing my job I'd try everything to keep it: complimenting my boss, staying late, going to optional meetings, being a good organizational citizen," says Anthony Klotz, professor of organizational behavior at the UCL School of Management in London. "But we know that when people aren't loving their jobs but are still going above and beyond, that it's a one-way trip to burnout."

The tight squeeze

In cases where jobs aren't immediately under threat, the effects of hugging are more likely to be slow burning. When an employee's only motivation is to collect a consistent paycheck, discretionary effort drops. They're less productive. Engagement takes a huge hit. Over time, that gradually chips away at their well-being.

"Humans want to feel useful, that they care about the work they're doing, and that they're investing their time well," Eatough says. "When efforts are low, that can impact a person's sense of value."

The effects stretch beyond the workplace, too. Frustrated and reluctant stayers can quickly end up in a vicious cycle, Klotz says, noting, "When you're in a situation that feels like it's sucking life out of you, you end up ruminating about how depleting it is, then end up so tired that you don't have energy for restorative activities outside of work. So it's this downward spiral: You begin your workday even more depleted."

Longer term, job hugging stunts growth. "When you're looking out for yourself, rather than the team or organization, your investment in working relationships begins to break down," Eatough says. "Over time, staying in that situation means you're more likely to become deeply cynical, which hurts the individual and their career trajectory."

When hugging becomes clinging

Feeling stuck is nothing new. At some point in their careers, most workers will be in a situation where if they could leave for a better role, they would, says Klotz, who predicted the Great Resignation. But what distinguishes job hugging is that it's anxiously clinging to a role during unfavorable labor markets. It's not that employees don't want to quit; it's that they can't.
"It's human nature that when there's a threat of any sort that we move away from it and towards stability," Klotz says. "Your job represents that stability. And currently, it's not a great time to switch jobs."

There are few options for job huggers. The first is speaking up and working with a manager to improve the situation. But this might be unlikely for employees who feel trapped or lack motivation in the first place. Klotz says cognitive reframing can help: focusing purely on the positive aspects of a draining role, such as a friendly team, and tuning out the rest. Finally, slowly backing away from extra tasks (in other words, quiet quitting) could mean workers can redraw work-life boundaries, in the interim at least. Otherwise, beyond Stoic philosophy or a benevolent boss, there is little choice but to wait it out.

In some cases, a job hugger may eventually turn it around, ease their grip, and become quietly content in their role. But more often, wanting to quit usually leads to actually quitting. In effect, job hugging is damage control: hanging on until the situation changes. "I think we'll see some people be resilient, wait it out, and find another role," Klotz says. "But there'll be others in the quagmire of struggling with exhaustion of spending eight hours a day in a job they don't like."


2025-12-04 09:30:00 | Fast Company

The rapid expansion of artificial intelligence and cloud services has led to a massive demand for computing power. The surge has strained data infrastructure, which requires lots of electricity to operate. A single midsize data center here on Earth can consume enough electricity to power about 16,500 homes, with even larger facilities using as much as a small city.

Over the past few years, tech leaders have increasingly advocated for space-based AI infrastructure as a way to address the power requirements of data centers. In space, sunshine, which solar panels can convert into electricity, is abundant and reliable. On November 4, 2025, Google unveiled Project Suncatcher, a bold proposal to launch an 81-satellite constellation into low Earth orbit. It plans to use the constellation to harvest sunlight to power the next generation of AI data centers in space.

So instead of beaming power back to Earth, the constellation would beam data back to Earth. For example, if you asked a chatbot how to bake sourdough bread, instead of firing up a data center in Virginia to craft a response, your query would be beamed up to the constellation in space, processed by chips running purely on solar energy, and the recipe sent back down to your device. Doing so would mean leaving the substantial heat generated behind in the cold vacuum of space.

As a technology entrepreneur, I applaud Google's ambitious plan. But as a space scientist, I predict that the company will soon have to reckon with a growing problem: space debris.

The mathematics of disaster

Space debris, the collection of defunct human-made objects in Earth's orbit, is already affecting space agencies, companies, and astronauts. This debris includes large pieces, such as spent rocket stages and dead satellites, as well as tiny flecks of paint and other fragments from discontinued satellites. Space debris travels at hypersonic speeds of approximately 17,500 mph in low Earth orbit. At this speed, colliding with a piece of debris the size of a blueberry would feel like being hit by a falling anvil.

Satellite breakups and anti-satellite tests have created an alarming amount of debris, a crisis now exacerbated by the rapid expansion of commercial constellations such as SpaceX's Starlink. The Starlink network has more than 7,500 satellites providing global high-speed internet. The U.S. Space Force actively tracks more than 40,000 objects larger than a softball using ground-based radar and optical telescopes. However, this number represents less than 1% of the lethal objects in orbit. The majority are too small for these telescopes to identify and track reliably.

In November 2025, three Chinese astronauts aboard the Tiangong space station were forced to delay their return to Earth because their capsule had been struck by a piece of space debris. Back in 2018, a similar incident on the International Space Station challenged relations between the U.S. and Russia, as Russian media speculated that a NASA astronaut may have deliberately sabotaged the station.

The orbital shell Google's project targets, a sun-synchronous orbit approximately 400 miles above Earth, is a prime location for uninterrupted solar energy. At this orbit, the spacecraft's solar arrays will always be in direct sunshine, where they can generate electricity to power the onboard AI payload. But for this reason, sun-synchronous orbit is also the single most congested highway in low Earth orbit, and objects in this orbit are the most likely to collide with other satellites or debris.
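Some rough back-of-envelope arithmetic puts the blueberry-versus-anvil comparison above in numbers (the 5-gram debris mass and 35-kilogram anvil are assumed values for illustration):

```python
# Back-of-envelope energy comparison; the 5 g debris mass and 35 kg anvil
# are assumed values for illustration.
MPH_TO_MS = 0.44704

debris_mass_kg = 0.005                 # blueberry-sized fragment, ~5 g
velocity_ms = 17_500 * MPH_TO_MS       # ~7,800 m/s orbital speed

impact_energy_j = 0.5 * debris_mass_kg * velocity_ms**2
print(f"Debris impact energy: {impact_energy_j / 1000:.0f} kJ")   # ~153 kJ

# A 35 kg anvil dropped 10 m delivers far less kinetic energy:
anvil_energy_j = 35 * 9.81 * 10
print(f"Falling anvil: {anvil_energy_j / 1000:.1f} kJ")           # ~3.4 kJ
```

Even a few grams at orbital speed carries orders of magnitude more energy than the falling anvil, which is why debris too small to track is still lethal.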
As new objects arrive and existing objects break apart, low Earth orbit could approach Kessler syndrome. In this theory, once the number of objects in low Earth orbit exceeds a critical threshold, collisions between objects generate a cascade of new debris. Eventually, this cascade of collisions could render certain orbits entirely unusable.

Implications for Project Suncatcher

Project Suncatcher proposes a cluster of satellites carrying large solar panels. They would fly within a radius of just 1 kilometer, each node spaced less than 200 meters apart. To put that in perspective, imagine a racetrack roughly the size of the Daytona International Speedway, where 81 cars race at 17,500 mph while separated by gaps about the distance you need to safely brake on the highway. This ultradense formation is necessary for the satellites to transmit data to each other. The constellation splits complex AI workloads across all its 81 units, enabling them to think and process data simultaneously as a single, massive, distributed brain. Google is partnering with a space company to launch two prototype satellites by early 2027 to validate the hardware.

But in the vacuum of space, flying in formation is a constant battle against physics. While the atmosphere in low Earth orbit is incredibly thin, it is not empty. Sparse air particles create orbital drag on satellites; this force pushes against the spacecraft, slowing it down and forcing it to drop in altitude. Satellites with large surface areas have more issues with drag, as they can act like a sail catching the wind. To add to this complexity, streams of particles and magnetic fields from the sun, known as space weather, can cause the density of air particles in low Earth orbit to fluctuate in unpredictable ways. These fluctuations directly affect orbital drag.

When satellites are spaced less than 200 meters apart, the margin for error evaporates. A single impact could not only destroy one satellite but also send it blasting into its neighbors, triggering a cascade that could wipe out the entire cluster and randomly scatter millions of new pieces of debris into an orbit that is already a minefield.

The importance of active avoidance

To prevent crashes and cascades, satellite companies could adopt a "leave no trace" standard, which means designing satellites that do not fragment, release debris, or endanger their neighbors, and that can be safely removed from orbit. For a constellation as dense and intricate as Suncatcher, meeting this standard might require equipping the satellites with reflexes that autonomously detect and dance through a debris field. Suncatcher's current design doesn't include these active avoidance capabilities.

In the first six months of 2025 alone, SpaceX's Starlink constellation performed a staggering 144,404 collision-avoidance maneuvers to dodge debris and other spacecraft. Similarly, Suncatcher would likely encounter debris larger than a grain of sand every five seconds. Today's object-tracking infrastructure is generally limited to debris larger than a softball, leaving millions of smaller debris pieces effectively invisible to satellite operators. Future constellations will need an onboard detection system that can actively spot these smaller threats and maneuver the satellite autonomously in real time.

Equipping Suncatcher with active collision-avoidance capabilities would be an engineering feat. Because of the tight spacing, the constellation would need to respond as a single entity. Satellites would need to reposition in concert, similar to a synchronized flock of birds. Each satellite would need to react to the slightest shift of its neighbor.
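A quick calculation shows why the onboard, autonomous reflexes described above matter at this spacing: a 200-meter gap closes almost instantly at plausible relative speeds (the speeds below are assumptions for illustration):

```python
# How quickly a 200 m separation closes at assumed relative velocities.
gap_m = 200.0

scenarios = {
    "slow station-keeping drift": 1.0,   # assumed 1 m/s drift between nodes
    "crossing debris": 10_000.0,         # assumed ~10 km/s crossing geometry
}

for label, v_rel_ms in scenarios.items():
    print(f"{label}: gap closes in {gap_m / v_rel_ms:.3f} s")

# Crossing debris closes the gap in ~0.02 s, far too fast for any
# ground-in-the-loop maneuver commanded from Earth.
```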
Paying rent for the orbit

Technological solutions, however, can go only so far. In September 2022, the Federal Communications Commission created a rule requiring satellite operators to remove their spacecraft from orbit within five years of the mission's completion. This typically involves a controlled de-orbit maneuver. Operators must now reserve enough fuel to fire the thrusters at the end of the mission to lower the satellite's altitude, until atmospheric drag takes over and the spacecraft burns up in the atmosphere.

However, the rule does not address the debris already in space, nor any future debris from accidents or mishaps. To tackle these issues, some policymakers have proposed a use tax for space debris removal. A use tax, or orbital-use fee, would charge satellite operators a levy based on the orbital stress their constellation imposes, much like larger or heavier vehicles paying greater fees to use public roads. These funds would finance active debris-removal missions, which capture and remove the most dangerous pieces of junk.

Avoiding collisions is a temporary technical fix, not a long-term solution to the space debris problem. As some companies look to space as a new home for data centers, and others continue to send satellite constellations into orbit, new policies and active debris-removal programs can help keep low Earth orbit open for business.

Mojtaba Akhavan-Tafti is an associate research scientist at the University of Michigan. This article is republished from The Conversation under a Creative Commons license. Read the original article.

