San Diego-based Shield AI is developing a first-of-its-kind fighter jet: a 2,000-mile-range pilotless plane that takes off and lands vertically and uses artificial intelligence to fly itself, even when adversaries jam navigation and communication systems. Like the company’s smaller, combat-tested autonomous drone, the V-BAT, the X-BAT doesn’t need a runway, allowing it to launch from remote islands or the decks of aircraft carriers or drone ships. But with its larger blended-wing-body design, the X-BAT can carry missiles and electronic weapons. Instead of propellers, it’s powered by an afterburning jet engine. “Airpower without runways is the holy grail of deterrence,” said Brandon Tseng, Shield’s cofounder and president.

The aircraft could join a new class of AI-piloted fighter jets being developed for the Pentagon and other defense agencies, where the aim is to deploy robotic wingmen alongside human pilots or as part of separate drone squadrons. Taking their cue from the fierce drone war in Ukraine, military officials around the globe are eyeing layers of cheaper, more disposable AI-powered drones in the air, on land, and at sea, with a single soldier responsible for an entire swarm. A separate race is on to field counter-drone systems. Investors have followed suit, pouring cash into a range of defense-tech firms, including Shield rivals like Anduril, Helsing, and publicly traded AeroVironment, or AV. Globally, venture capital investment in defense companies surged to $31 billion last year, a 33% increase over the previous year, according to McKinsey. Anduril, founded in 2017, is the largest of the so-called “neo-prime” contractors, with a valuation of around $30 billion. Shield, founded in 2015 and valued at $5.3 billion, is the next biggest defense startup. (Fast Company named it a Most Innovative Company in 2020.)
The X-BAT’s tail-less blended-wing-body design aims for added lift [Image: Shield AI]

With the X-BAT, Shield joins a number of startups and legacy contractors developing AI aircraft that can match the capabilities of an F-16 in a smaller form factor. Last month, Shield was picked by the Air Force to provide the AI software for the YFQ-44, Anduril’s entrant in the service’s Collaborative Combat Aircraft (CCA) competition. Defense giant RTX was selected to build the software for the other drone prototype being considered by the Air Force, the YFQ-42, built by General Atomics. Both drones are roughly the same size as the 26-foot-long X-BAT, which is about a third as large as a conventional fighter jet. The service has said it plans to choose a design for production by fiscal year 2026, and has indicated it may select multiple companies. Last month, Breaking Defense reported that the Navy had selected another set of drone designs by Anduril, Northrop Grumman, Boeing, and General Atomics for its own collaborative combat aircraft competition. The Army and the Marine Corps are also making plans for their own “loyal wingmen.”

Tseng declined to comment on Shield’s role in the current CCA program, or on the potential of X-BAT to enter a future competition. Still, he said the aircraft’s price, at around $27 million, is in the same range as the Collaborative Combat Aircraft programs, with variations based on mission systems and configurations. That translates to about a tenth of the cost per effect compared to legacy fifth-generation aircraft, Tseng said. X-BAT also represents a major development for Shield AI’s business, the company said in a press release. “We believe the greatest victory requires no war,” said Tseng. “To make that belief real, we’re executing a simple but ambitious master plan: Prove the value of autonomy, scale it across domains, and reimagine airpower. X-BAT represents the next part of that plan.”
The X-PAD on its launch pedestal [Photo: Shield AI]

Up and down

Shield AI’s plans have taken several turns in recent years. Founded in 2015 by brothers Brandon and Ryan Tseng and Andrew Reiter, the 1,000-person company has sold hundreds of its V-BAT drones. But after it landed a $240 million funding round in March, Brandon told Bloomberg that the company would place more of an emphasis on its AI software, which had been a larger focus before it bought longtime V-BAT maker Martin UAV in 2021. The company has generated billions of dollars in revenue, and had planned to reach profitability by 2025. But as Forbes reported in May, those projections were scrapped after a service member had his fingers partially severed during a V-BAT landing in 2023. Shortly before Forbes published its story, Ryan Tseng stepped down as CEO, and Gary Steele, a Cisco executive, took the helm. (Ryan became chief strategic officer and remains on the board of directors.)

Shield AI founders Brandon and Ryan Tseng [Photo: Shield AI]

Company officials have said they have taken a number of steps to address safety concerns, including adding unassisted launch and landing capabilities to the V-BAT. The service member has since fully recovered. “Today, V-BAT retains a perfect record of no injuries when following trained procedures,” Tseng told Forbes. While the accident “delayed” the decisions of prospective customers, “we are back on track now,” he said.

A human still in the loop

In recent months, Shield’s software division has entered partnerships with legacy defense contractors including RTX, Airbus, and shipbuilder HII to incorporate AI into their vehicles and weapons systems. Shield’s Hivemind software, which grew out of Chief Technology Officer Nathan Michael’s research at Carnegie Mellon University as well as AI startup Heron Systems, which Shield acquired in 2021, can help pilot vehicles ranging from attack drones to F-16s, helicopters, and boats, operating individually or in swarms.
Shield conducts wind tunnel tests on an X-BAT mock-up [Photo: Shield AI]

Last year, Air Force Secretary Frank Kendall flew in a test fighter piloted by Shield AI’s algorithms, taking on a manned F-16 in a simulated dogfight over the California desert. “Shield AI aspires to service the autonomous needs for the defense sector, like Palantir services its intelligence needs,” Brandon Tseng told Bloomberg in March. (Shield also announced a partnership with Palantir last year, under which the firms would use each other’s software.)

Shield AI also continues to develop the V-BAT. The 9-foot-long drone, which can fly more than 80 miles and stay aloft for more than 13 hours carrying a payload of about 40 pounds, has been used by Ukraine, Israel, and other countries to carry out reconnaissance and targeting operations. U.S. special forces have deployed the V-BAT, and the Coast Guard, which has a five-year contract with Shield AI worth $200 million, has used the drone with “joint forces” to interdict billions of dollars’ worth of narcotics, Tseng said.

Shield engineers with a V-BAT [Photo: Shield AI]

Through over 150 V-BAT deployments in Ukraine, the AI software has also been put through its paces in places where GPS and other communications aren’t available. “You’re telling the aircraft, hey, this is your zone of operations, we want you to do X-Y-Z in this area,” Armor Harris, Shield’s senior vice president of aircraft, told The War Zone. And “given its last set of instructions and the rules of engagement for what it’s allowed to do, what it’s not allowed to do, it’ll go and it’ll continue its mission autonomously when those comms links are not there.” That capability is “where the system is more advanced than anything else in the world to date.” Still, not everything is autonomous, at least not yet. “Fundamentally [at] Shield AI, we believe that a human should be on-the-loop for an offensive kill decision,” Harris said.
All-out drone push

The Collaborative Combat Aircraft programs are part of an all-out push at the Pentagon (and in Silicon Valley) for AI and drones. In June 2025, President Donald Trump issued an executive order called “Unleashing American Drone Dominance” that aims to accelerate commercialization of drone technologies, and the administration’s budget request has allocated billions of dollars to unmanned systems and AI. That includes an effort to onshore the supply chain for drones and the electronics and minerals they require, which is currently dominated by China. Shield AI has some experience here: In March, the Chinese Ministry of Commerce placed 15 U.S. entities, including Shield AI, on its export control list, barring them from the export of dual-use commodities.

[Photo: Shield AI]

Shield is working with Pratt & Whitney and GE to develop the X-BAT’s jet engine. Company officials declined to share further details on the engine, or how it handles takeoff and landing. But Tseng pointed to subsystems built with proven U.S. partners to ensure performance, reliability, and resilient American supply chains. Tseng said the company expects to conduct initial vertical takeoff and landing demonstrations for X-BAT as early as fall 2026, followed by all-up flight testing and operational validation in 2028. As for possible X-BAT manufacturing sites, Tseng said, “we are in discussions with several states and their leadership.” And if some defense agency eventually places an order, he added, “the selected location will create thousands of jobs and generate billions in economic value.”
Category:
E-Commerce
From July 14 to November 9, 2023, the American actors’ union SAG-AFTRA, representing 160,000 people, went on strike over a labor dispute with the Alliance of Motion Picture and Television Producers. Eventually, both sides agreed to terms that theoretically would put limits on how actors’ images and output could be used. Strike over, everybody went back to work and the entertainment industrial complex started humming again. But they apparently never took heed of the lessons offered by a somewhat obscure 2013 movie, The Congress, which eerily anticipated the crisis Hollywood is now facing.

Caught by surprise? Really?

Fast-forward to September of 2025. Dutch actor and comedian Eline Van der Velden’s company Particle6 released an AI “actor” named Tilly Norwood with the express intention of her becoming the next Scarlett Johansson. The bot had its own social media presence, appeared in comedy sketches, and breathlessly declared, “I may be AI, but I’m feeling very real emotions right now. I am so excited for what’s coming next!” The news that there were agents in talks to sign Norwood, the way they might sign a real actor, sparked an incredible Hollywood firestorm. Lots of denunciations of this use of technology. Lots of claims that this was unfair. And lots and lots of workplace anxiety.

But should they really have been this surprised? Futurist Amy Webb suggests not. As she says, “Let’s not kid ourselves: they’ve had more than a decade to prepare for this.” Toy Story, launched in 1995, was the first full-length feature film to be fully animated, followed by a string of other hits that did very well without real actors, thank you very much. Lara Croft, the Tomb Raider game star that launched in 1996, became a movie character in 2001. In 2002, a simulated movie star played the lead in the science fiction movie Simone. In 2011, Japanese idol group AKB48 introduced a new member, Aimi Eguchi.
She became popular and was added to the band, only to graduate when her identity as a composite of the band’s other members was revealed. By 2016, we had AI-generated influencers like Lil Miquela, who appear in advertisements, garner thousands of followers, and are paid to endorse brands. And the precedent for Tilly signing with an agent has already been established: Miquela signed on with CAA as its first virtual client as far back as 2020.

Willful blind spots

Now seemingly caught by surprise, what did the strategists in Hollywood miss? Most likely, too much focus on their own industry. Fractious labor relations, contract negotiations, and changing entertainment consumption behavior can eat up a lot of executive bandwidth. This leads to not thinking in terms of the larger arenas in which competition takes place. The big threat to this business was not other industry players but something coming along that made what they did unnecessary, undesirable, or too expensive. Once an innovation has demonstrated its efficacy, particularly if it is popular and making money for someone, it is almost impossible to put the genie back in its bottle (see: targeted Internet advertising or ride-sharing). It is also no secret that some moviemakers longed to put AI-generated actors in leading roles, even experimenting with bringing some back from the dead. But perhaps the most significant reason I believe they didn’t pick up on the weak signals is because they didn’t want to. Accepting the idea of digital acting and the creation of digital worlds means accepting the idea that expertise, talent, and painfully acquired skills will become obsolete. Unfortunately, the law of disruption, in which the complicated and difficult becomes easy and the expensive becomes cheap, doesn’t really care about your preferences.

Preparing for an existential threat

What might they have done to prepare?
They could have launched small-scale experiments using digital actors to learn about audience acceptance, production workflows, and creative possibilities. They could have allocated resources to dedicated teams exploring new forms of storytelling. With the constraints of physical acting and reality removed, stories could be developed that are as revolutionary as the movies themselves were when they created new possibilities beyond what could be done on a physical stage. They could have worked with regulators and their unions to establish a glide path for AI in their sector that would be fair with respect to intellectual property. They could have seriously invested in the digital technologies used to create these new forms of entertainment, rather than leaving all this to technology companies such as Netflix.

The end of mass market entertainment?

Tilly Norwood isn’t the disruption; she’s the warning shot. The real disruption comes when AI can generate not just actors but entire films, on demand, personalized to individual viewer preferences, at essentially zero marginal cost. The studios that survive won’t be the ones with the biggest IP libraries or the most prestigious awards. They’ll be the ones who recognize that the fundamental assumptions of their industry (that content is scarce, that talent is human, that stories are fixed) are all being systematically dismantled, and come up with new business models that take advantage of the post-inflection-point world. The weak signals are there. The question is: Who has the appetite to listen?
Ukraine’s state security service has unveiled an upgraded sea drone it says can now operate anywhere in the Black Sea, carry heavier weapons, and use artificial intelligence for targeting. Ukraine has used the unmanned naval drones to target Russian shipping and infrastructure in the Black Sea. The Security Service of Ukraine, known by its Ukrainian acronym SBU, has credited strikes by the unmanned vessel known as the “Sea Baby” with forcing a strategic shift in Russia’s naval operations. The range of the Sea Baby was expanded from 1,000 kilometers (620 miles) to 1,500 kilometers (930 miles), SBU said. It can carry up to 2,000 kilograms (about 4,400 pounds) of payload, SBU officials said. At a demonstration attended by The Associated Press, variants included vessels fitted with a multiple-rocket launcher and another with a stabilized machine-gun turret. SBU Brig. Gen. Ivan Lukashevych said the new vessels also feature AI-assisted friend-or-foe targeting systems, can launch small aerial attack drones, and carry multilayered self-destruct systems to prevent capture.

Developing a new kind of naval warfare

Drone strikes have been used in successful attacks against 11 Russian vessels, including frigates and missile carriers, SBU said, prompting the Russian navy to relocate its main base from Sevastopol in Crimea to Novorossiysk on Russia’s Black Sea coast. “The SBU became the first in the world to pioneer this new kind of naval warfare and we continue to advance it,” Lukashevych said, adding that the Sea Baby has evolved from a single-use strike craft into a reusable, multipurpose platform that expands Ukraine’s offensive options. Authorities asked that the time and location of the demonstration not be made public for security reasons. The craft are operated remotely from a mobile control center inside a van, where operators use a bank of screens and controls. “Cohesion of the crew members is probably the most important thing.
We are constantly working on that,” said one operator, who was identified only by his call sign, “Scout,” per Ukrainian military protocol.

Ukrainian sea drones helped push back Russia’s navy

The SBU also said sea drones helped carry out other high-profile strikes, including repeated attacks on the Crimean Bridge, most recently targeting its underwater supports in a bid to render it unusable for heavy military transport. The Sea Baby program is partially funded by public donations through a state-run initiative and is coordinated with Ukraine’s military and political leadership. The evolution from expendable strike boats to reusable, networked drones marks an important advance in asymmetric naval warfare, Lukashevych said. “On this new product, we have installed rocket weaponry that will allow us to work from a large distance outside of the attack range of enemy fire. We can use such platforms to carry heavy weaponry,” he said. “Here we can show Ukrainians the most effective use of the money they have donated to us.”

Associated Press journalists Alex Babenko, Yehor Konovalov and Volodymyr Yurchuk contributed to this report. Follow AP’s coverage of the war in Ukraine at https://apnews.com/hub/russia-ukraine

Efrem Lukatsky and Derek Gatopoulos, Associated Press
Spending just 36 minutes listening to your own brain waves, over four sessions, can reduce stress and anxiety, according to a new study by neuroscientists at the Wake Forest University School of Medicine. Published in the journal Global Advances in Integrative Medicine and Health, the study looked at how to reduce stress-related symptoms in 144 healthcare workers with moderate-to-high levels of perceived stress. The healthcare workers were placed in two groups: one that received four sessions of a sound-based relaxation intervention over two weeks, and another that was placed on a waitlist as a control group. The workers spent a little over half an hour relaxing in a zero-gravity chair with their eyes closed as closed-loop acoustic neuromodulation technology translated their brainwaves into personalized tones in real time, the idea being that the echoed tones interact with the brain, helping it balance and quiet itself and release stress patterns. When researchers measured the results after six to eight weeks, they found the participants reported meaningful reductions in stress, anxiety, and insomnia, as well as significant reductions in fatigue and depression, and improved subjective cognition. “These results suggest that closed-loop acoustic neuromodulation is a safe, scalable, and effective option to complement organizational strategies for supporting healthcare worker brain health and well-being,” said Charles H. Tegeler, M.D., professor of neurology at Wake Forest University School of Medicine and principal investigator of the study. “We are eager to identify ways to broadly offer the intervention to teammates across our health system and beyond.” What makes this study different from previous work on neuromodulation is that it streamlined the treatment process with fewer, shorter sessions, making the treatment more practical and accessible for real-world application. It also included study participants regardless of their medication or substance use.
Prince Harry and his wife Meghan have joined prominent computer scientists, economists, artists, evangelical Christian leaders, and American conservative commentators Steve Bannon and Glenn Beck to call for a ban on AI superintelligence that threatens humanity. The letter, released Wednesday by a politically and geographically diverse group of public figures, is squarely aimed at tech giants like Google, OpenAI, and Meta Platforms that are racing each other to build a form of artificial intelligence designed to surpass humans at many tasks.

The letter calls for a ban unless some conditions are met

The 30-word statement says: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.” In a preamble, the letter notes that AI tools may bring health and prosperity, but alongside those tools, “many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

Who signed and what they’re saying about it

Prince Harry added in a personal note that “the future of AI should serve humanity, not replace it. I believe the true test of progress will be not how fast we move, but how wisely we steer. There is no second chance.” Signing alongside the Duke of Sussex was his wife Meghan, the Duchess of Sussex. “This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley.
“It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

Also signing were AI pioneers Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create. But the list also has some surprises, including Bannon and Beck, in an attempt by the letter’s organizers at the nonprofit Future of Life Institute to appeal to President Donald Trump’s Make America Great Again movement, even as Trump’s White House staff has sought to loosen restrictions on AI development in the U.S. Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama. Former Irish President Mary Robinson and several British and European parliamentarians and former members of the U.S. Congress signed, as did actors Stephen Fry and Joseph Gordon-Levitt, and musician will.i.am, who has otherwise embraced AI in music creation. “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc.,” wrote Gordon-Levitt, whose wife Tasha McCauley served on OpenAI’s board of directors before the upheaval that led to CEO Sam Altman’s temporary ouster in 2023. “But does AI also need to imitate humans, groom our kids, turn us all into slop junkies, and make zillions of dollars serving ads? Most people don’t want that.”

Are worries about AI superintelligence also feeding AI hype?
The letter is likely to provoke ongoing debates within the AI research community about the likelihood of superhuman AI, the technical paths to reach it, and how dangerous it could be. “In the past, it’s mostly been the nerds versus the nerds,” said Max Tegmark, president of the Future of Life Institute and a professor at the Massachusetts Institute of Technology. “I feel what we’re really seeing here is how the criticism has gone very mainstream.” Complicating the broader debates is that the same companies that are striving toward what some call superintelligence and others call artificial general intelligence, or AGI, are also sometimes inflating the capabilities of their products, which can make them more marketable and has contributed to concerns about an AI bubble. OpenAI was recently met with ridicule from mathematicians and AI scientists when one of its researchers claimed ChatGPT had figured out unsolved math problems, when what it really did was find and summarize what was already online. “There’s a ton of stuff that’s overhyped and you need to be careful as an investor, but that doesn’t change the fact that, zooming out, AI has gone much faster in the last four years than most people predicted,” Tegmark said. Tegmark’s group was also behind a March 2023 letter, issued while the commercial AI boom was still dawning, that called on tech giants to temporarily pause the development of more powerful AI models. None of the major AI companies heeded that call. And the 2023 letter’s most prominent signatory, Elon Musk, was at the same time quietly founding his own AI startup to compete with the companies he wanted to take a six-month pause. Asked if he reached out to Musk again this time, Tegmark said he wrote to the CEOs of all major AI developers in the U.S. but didn’t expect them to sign. “I really empathize with them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said.
“I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.” Google, Meta, OpenAI, and Musk’s xAI didn’t immediately respond to requests for comment Wednesday.

Matt O’Brien, AP technology writer