The AI landscape is rapidly evolving, with America's $500 billion Stargate Project signaling massive infrastructure investment while China's DeepSeek emerges as a formidable competitor. DeepSeek's advanced AI models, rivaling Western capabilities at lower cost, raise significant concerns about potential cybersecurity threats, data mining, and intelligence gathering on a global scale. This development highlights the urgent need for robust AI regulation and security measures in the U.S.

As the AI race intensifies, the gap between technological advancement and governance widens. The U.S. faces the critical challenge of not only accelerating its AI capabilities through projects like Stargate but also developing comprehensive regulatory frameworks to protect its digital assets and national security interests. With DeepSeek's potential to circumvent export controls and conduct sophisticated cyber operations, the U.S. must act swiftly to ensure its AI innovations remain secure and competitive in this rapidly changing technological landscape.

We have already seen the first wave of AI-powered dangers. Deepfakes, bot accounts, and algorithmic manipulation on social media have helped undermine social cohesion while fostering political echo chambers. But these dangers are child's play compared to the risks that will emerge in the next five to ten years.

During the pandemic, we saw the unparalleled speed with which new vaccines could be developed with the help of AI. As Mustafa Suleyman, cofounder of DeepMind and now CEO of Microsoft AI, has argued, it will not be long before AI can design new bioweapons with equal speed. And these capabilities will not be confined to state actors. Just as modern drone technology has recently democratized access to capabilities that were once the sole province of the military, any individual with even a rudimentary knowledge of coding will soon be able to weaponize AI from their bedroom at home.

The fact that U.S. senators were publicly advocating the shooting down of unmanned aircraft systems, despite the lack of any legal basis for doing so, is a clear sign of a systemic failure of control. This failure is even more concerning than the drone sightings themselves. When confidence in the government's ability to handle such unexpected events collapses, the result is fear, confusion, and conspiratorial thinking. But there is much worse to come if we fail to find new ways to regulate novel technologies. If you think the systemic breakdown in response to drone sightings is worrying, imagine how things will look when AI starts causing problems.

Seven years spent helping the Departments of Defense and Homeland Security with innovation and transformation (both organizational and digital) has shaped my thinking about the very real geopolitical risks that AI and digital technologies bring with them. But these dangers do not come only from outside our country. The past decade has seen an increasing tolerance among many U.S. citizens for the idea of political violence, a phenomenon cast into particularly vivid relief in the wake of the shooting of UnitedHealthcare CEO Brian Thompson. As automation replaces increasing numbers of jobs, it is entirely possible that a wave of mass unemployment will lead to severe unrest, multiplying the risk that AI will be used as a weapon to lash out at society at large.

These dangers will be on our doorsteps soon. But even more concerning are the unknown unknowns.
AI is developing at lightning speed, and even those responsible for that development have no idea exactly where we will end up. Nobel laureate Geoffrey Hinton, the so-called Godfather of AI, has said there is a significant chance that artificial intelligence will wipe out humanity within just 30 years. Others suggest that the time horizon is much narrower. The simple fact that there is so much uncertainty about the direction of travel should concern us all deeply. Anyone who is not at least worried has simply not thought hard enough about the dangers.

The regimented regulation has to be risk-based

We cannot afford to treat AI regulation in the same haphazard fashion that has been applied to drone technology. We need an adaptable, far-reaching, and future-oriented approach to regulation, one designed to protect us from whatever might emerge as we push back the frontiers of machine intelligence.

During a recent interview with Senator Richard Blumenthal, I discussed how we can effectively regulate a technology that we do not yet fully understand. Blumenthal is the coauthor, with Senator Josh Hawley, of the Bipartisan Framework for U.S. AI Act, also known as the Blumenthal-Hawley Framework. He proposes a relatively light-touch approach, suggesting that the way the U.S. government regulates the pharmaceutical industry can serve as a model for our approach to AI. This approach, he argues, provides for strict licensing and oversight of potentially dangerous emerging technologies without placing undue restrictions on the ability of American companies to remain world leaders in the field.

"We don't want to stifle innovation," Blumenthal says. "That's why the regimented regulation has to be risk-based. If it doesn't pose a risk, we don't need a regulator."

This approach offers a valuable starting point for discussion, but I believe we need to go further. While a pharmaceutical model may be sufficient for regulating corporate AI development, we also need a framework that limits the risks posed by individuals. The manufacture and distribution of pharmaceuticals require significant infrastructure, but computer code is an entirely different beast: it can be replicated endlessly and transmitted anywhere on the planet in a fraction of a second. The possibility of problematic AI being created and leaking out into the wild is simply much higher than is the case for new and dangerous drugs. Given the potential for AI to generate extinction-level outcomes, it is not too far-reaching to say that the regulatory frameworks surrounding nuclear weapons and nuclear energy are more appropriate for this technology than those that apply to the drug industry.

The announcement of the Stargate Project adds particular urgency to this discussion. While massive private-sector investment in AI infrastructure is crucial for maintaining American technological leadership, it also accelerates the timeline for developing comprehensive regulatory frameworks. We cannot afford to have our regulatory responses lag years behind technological developments when those developments are being measured in hundreds of billions of dollars.

However we choose to balance the risks and rewards of AI research, we need to act soon. As we saw with the drone sightings that took place before Christmas, the lack of a comprehensive and cohesive framework for managing the threats from new technologies can leave government agencies paralyzed.
And with risks that take in anything up to and including the extinction of humanity, we cannot afford this kind of inertia and confusion. We need a comprehensive regulatory framework that balances innovation with safety, one that recognizes both AI's transformative potential and its existential dangers. That means:

Promoting responsible innovation. Encouraging the development and deployment of AI technologies in critical sectors in a safe and ethical manner.

Establishing robust regulations. Public trust in AI systems requires both clear and enforceable regulatory frameworks and transparent systems of accountability.

Strengthening national security. Policymakers must leverage AI to modernize military capabilities, deploying AI solutions that predict, detect, and counter cyber threats while ensuring the ethical use of autonomous systems.

Investing in workforce development. As a nation, we must establish comprehensive training programs that upskill workers for AI-driven industries while enhancing STEM (science, technology, engineering, and math) education to build foundational AI expertise among students and professionals.

Leading in global AI standards. The U.S. must spearhead efforts to establish global norms for AI use by partnering with allies to define ethical standards and to safeguard intellectual property.

Addressing public concerns. Securing public trust in AI requires increasing transparency about the objectives and applications of AI initiatives while also developing strategies to mitigate job displacement and ensure equitable economic benefits.

The Stargate investment represents both the promise and the challenge of AI development. While it demonstrates America's potential to lead the next technological revolution, it also highlights the urgent need for regulatory frameworks that can match the pace and scale of innovation. With investments of this magnitude reshaping our technological landscape, we cannot afford to get this wrong. We may not get a second chance.
Global sustainability models are failing. They've been designed to showcase ethical trade and environmental responsibility, but they fundamentally misunderstand how global supply chains operate, especially the critical, unseen work at the beginning of essential value chains such as critical minerals. For decades, these models have burdened African merchants, miners, and farmers, the backbone of global industries from cocoa to lithium, while corporations further along the chain claim the benefits. The systems celebrate end products, like sleek electric vehicles (EVs) or iPhones, while ignoring the heavy lifting at the start of the chain, where the work is most difficult. This imbalance in sustainability frameworks doesn't just sideline African businesses. It undermines the entire premise of accountability that we want to engender among commercial supply chain stakeholders.

The unfair burden on the start of the supply chain

The reality of global supply chains is simple: The earliest stages, where raw materials are extracted and processed, require the most effort. African farmers, miners, and merchants are at the very heart of these early stages. They're the ones putting in the hardest work: extracting resources, growing crops, and preparing the raw materials that fuel industries around the world. But despite their essential role, they're stuck carrying the heaviest burden. Strict regulations and sustainability requirements often hit them the hardest, even though they have the fewest resources to meet these demands.

Take cocoa farmers in Africa, for instance. Many are already working on tight margins, struggling to make enough to feed their families. Then along comes the European Union's Deforestation Regulation (EUDR), which demands proof that their cocoa isn't linked to deforestation. While the goal is noble, the execution has left these farmers scrambling to provide documentation they've never needed before. For many, the cost of compliance is simply too high, and failing to meet the standards means losing access to international buyers.

It's not just farmers. In the mining sector, lithium, the critical ingredient for EV batteries, is dug up under tough, often dangerous conditions. The raw material is shipped overseas for refining and manufacturing, where the final product becomes a celebrated symbol of sustainability. But little thought is given to the people who made that product possible in the first place. Instead of recognizing the environmental and social costs borne by African miners, global narratives around green batteries conveniently ignore this reality. The hard work is erased, and the end product, a shiny new electric vehicle, becomes the hero of the story.

Why these models don't work

The deeper issue is that global sustainability models were never designed with supply chain realities in mind. They were built to make sense on paper, not in practice. Here's why they fail:

They ignore the realities of extraction. The first stages of the supply chain, extraction and initial processing, are treated as a liability, not a foundation. These stages are overregulated, under-supported, and painted as inherently dirty, while the later stages enjoy the benefits of cleaner reputations and fewer demands.

They push costs downstream. Compliance costs are overwhelmingly placed on the smallest and least-resourced players. Farmers, artisanal miners, and small merchants are expected to shoulder the expense of meeting global benchmarks, while corporations further up the chain avoid their fair share of responsibility.
They celebrate the end, not the beginning. By the time raw materials are turned into recognizable products, like the chocolate bars we enjoy or the batteries that power electric vehicles, they're celebrated as symbols of innovation and progress. But the reality behind those products is far less glamorous. The hard work, long hours, and sacrifices made at the start of the supply chain are often ignored. At best, they're reduced to a footnote; at worst, they're treated as inconvenient details in the story of sustainability.

Rebalance the equation

If sustainability is going to work, for people and the planet, we need to rethink these frameworks entirely. That means starting from the ground up, ensuring fairness across every step of the supply chain. Here's where the change needs to happen:

Stop pushing the costs onto producers. Sustainability can't come at the expense of the people doing the hardest work. Corporations that depend on African resources need to take responsibility for compliance costs. For example, chocolate companies that rely on African cocoa should be actively investing in the farmers and cooperatives that keep their supply chains running. It's not just a moral obligation; it's a business necessity.

Put money into local solutions. The earliest stages of the supply chain need better support. This means governments, corporations, and international institutions must work together to invest in systems that help producers succeed. From building cooperatives for artisanal miners to funding training programs for sustainable farming, these investments would ease the pressure on producers while ensuring global standards can actually be met.

Measure what really matters. Current sustainability metrics focus too much on quick wins and shiny results. But real progress happens when we focus on achievable, incremental improvements. Instead of setting impossible benchmarks, we need to create standards that reflect the realities of resource extraction and reward meaningful change.

Work together to share the load. No single entity can fix this alone. Public-private partnerships are key to amplifying sustainability efforts without placing all the costs on producers. Companies that actively work with merchants to address issues like traceability and compliance have already shown that fair, sustainable practices are possible, especially when governments step in to support these efforts.

A fairer vision for sustainability

Sustainability should not mean shifting the burden onto the communities that sustain the world's supply chains. African merchants, farmers, and miners are not just resource providers; they are the backbone of industries that drive global progress. They deserve recognition, support, and a fair share of the benefits.

Global sustainability models need to change, urgently. If they don't, they'll keep fueling inequality while claiming to promote progress. It's time to stop pretending that these systems are working, because they're not. We need to build frameworks that reflect the real-world challenges of supply chains, ones that are fair, practical, and genuinely sustainable, for everyone involved.

Anu Adedoyin Adasolum is CEO of Sabi.