Even tech giant Apple couldn’t prevent its artificial intelligence from making things up. Last month, the company suspended its AI-powered news alert feature after it falsely claimed a murder suspect had shot himself, one of several fabricated headlines that appeared under trusted news organizations’ logos. The embarrassing pullback came despite Apple’s vast resources and technical expertise. Most users probably weren’t fooled by the more obvious errors, but the incident highlights a growing challenge: Companies are racing to integrate AI into everything from medical advice to legal documents to financial services, often prioritizing speed over safety. Many of these applications push the technology beyond its current capabilities, creating risks that aren’t always obvious to users.

“The models are not failing,” says Maria De-Arteaga, an assistant professor at the University of Texas at Austin’s McCombs School of Business. “We’re deploying the models for things that they’re not fit for purpose.”

As the technology becomes more embedded in daily life, researchers and educators face two distinct hurdles: teaching people to use these tools responsibly rather than over-relying on them, while also convincing AI skeptics to learn enough about the technology to be informed citizens, even if they choose not to use it. The goal isn’t simply to try to “fix” the AI, but to learn its shortcomings and develop the skills to use it wisely. It’s reminiscent of how early internet users had to learn to navigate online information, eventually understanding that while Wikipedia might be a good starting point for research, it shouldn’t be cited as a primary source. Just as digital literacy became essential for participating in modern democracy, AI literacy is becoming fundamental to understanding and shaping our future.

At the heart of these AI mishaps are the hallucinations and distortions that lead AI models to generate false information with seeming confidence. The problem is pervasive: In one 2024 study, chatbots got basic academic citations wrong between 30% and 90% of the time, mangling paper titles, author names, and publication dates.

While tech companies promise these hallucinations can be tamed through better engineering, De-Arteaga says researchers are finding that they may be fundamental to how the technology works. She points to a paper from OpenAI, the same company that partnered with Apple for news summarization, which concluded that well-calibrated language models must hallucinate as part of their creative process. If they were constrained to produce only factual information, they would cease to function effectively.

“From a mathematical and technical standpoint, this is what the models are designed to do,” De-Arteaga says.

Teaching literacy

As researchers acknowledge that AI hallucinations are inevitable and humans naturally tend to put too much trust in machines, educators and employers are stepping in to teach people how to use these tools responsibly. California recently passed a law requiring AI literacy to be incorporated into K-12 curricula starting this fall. And the European Union’s AI Act, which went into effect on February 5, requires organizations that use AI in their products to implement AI literacy programs.

“AI literacy is incredibly important right now, especially as we’re trying to figure out what are the policies, what are the boundaries, what do we want to accept as the new normal,” says Victor Lee, an associate professor in the Graduate School of Education at Stanford University.
“Right now, people who know more speak really confidently and are able to direct things, and there needs to be more societal consensus.”

Lee sees parallels to how society adapted to previous technologies. “Think about calculators: to this day, there are still divides about when to use a calculator in K-12, how much you should know versus how much the calculator should be the source of things,” he says. “With AI, we’re having that same conversation, often with writing as the example.”

Under California’s new law, AI literacy education must include understanding how AI systems are developed and trained, their potential impacts on privacy and security, and the social and ethical implications of AI use. The EU goes further, requiring companies that produce AI products to train applicable staff to have the “skills, knowledge and understanding that allow providers, deployers and affected persons . . . to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause.” Both frameworks emphasize that AI literacy isn’t just technical knowledge; it’s about developing the critical thinking skills to evaluate AI’s appropriate use in different contexts.

Amid a marketing onslaught by Big Tech companies, the challenge facing educators is complex. Recent research published in the Journal of Marketing shows that people with less understanding of AI are actually more likely to embrace the technology, viewing it as almost magical. The researchers say this link between lower literacy and higher receptivity suggests that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy.

The goal isn’t to dampen openness to new technology, educators say, but to combine it with critical thinking skills that help people understand both AI’s potential and its limitations. That’s especially important for people who tend to lack access to the technology, or who are simply skeptical or fearful about AI.

For Lee, successful AI literacy requires seeing through the magic. “The anxiety and uncertainty feeds a lot of the skepticism and doubt or non-willingness to even try AI,” he says. “Seeing that AI is actually a bunch of different things, and not a sentient, talking computer, and that it’s not even really talking, but just spitting out patterns that are appropriate, is part of what AI literacy would help to instill.”

At the City University of New York, Luke Waltzer, director of the Teaching and Learning Center at the school’s Graduate Center, is leading a project to help faculty develop approaches for teaching AI literacy within their disciplines. “Nothing about their adoption or their integration into our ways of thinking is inevitable,” Waltzer says. “Students need to understand that these tools have a material basis: they’re made by men and women, they have labor implications, they have an ecological impact.”

The project, backed by a $1 million grant from Google, will work with 75 professors over three years to develop teaching methods that examine AI’s implications across different fields. Materials and tools developed through the project will be distributed publicly so other educators can benefit from CUNY’s work. “We’ve seen the hype cycles around massively open online courses that were going to transform education,” Waltzer says. “Generative AI is distinct from some of those trends, but there’s definitely a lot of hype. Three years lets things settle. We will be able to see the future more clearly.”

Such initiatives are spreading rapidly across higher education. The University of Florida aims to integrate AI into every undergraduate major and graduate program. Barnard College has created a “pyramid” approach that gradually builds students’ AI literacy from basic understanding to advanced applications. At Colby College, a private liberal arts college in Maine, students are building their literacy with a custom portal that lets them test and compare different chatbots. Around 100 universities and community colleges have launched AI credentials, according to research from the Center for Security and Emerging Technology, with degree conferrals in AI-related fields increasing 120% since 2011.

Beyond the classroom

For most people, learning to navigate AI means sorting through corporate marketing claims with little guidance. Unlike students who will soon have formal AI education, adults must figure out on their own when to trust these increasingly prevalent tools, and when they’re being oversold by companies eager to recoup massive AI investments. This self-directed learning is happening quickly: LinkedIn found that workers are adding AI literacy skills such as prompt engineering and proficiency with tools like ChatGPT at nearly five times the rate of other professional skills.

As universities and lawmakers try to keep up, tech companies are offering their own classes and certifications. Nvidia recently announced a partnership with California to train 100,000 students, educators, and workers in AI, while companies like Google and Amazon Web Services offer their own AI certification programs. Intel aims to train 30 million people in AI skills by 2030. In addition to free online AI skills courses offered by institutions like Harvard University and the University of Pennsylvania, people can also learn AI basics from companies like IBM, Microsoft, and Google.

“AI literacy is like digital literacy: it’s a thing,” De-Arteaga says. “But who should teach it? Meta and Google would love to be teaching you their view of AI.”

Instead of relying on companies with a vested interest in selling you on AI’s utility, Hare suggests starting with AI tools in areas where you have expertise, so you can recognize both their usefulness and their limitations. A programmer might use AI to help write code more efficiently while being able to spot bugs and security issues that a novice would miss. The key is combining hands-on experience with guidance from trusted third parties who can provide unbiased information about AI’s capabilities, particularly in high-stakes areas like healthcare, finance, and defense.

“AI literacy isn’t just about how a model works or how to create a dataset,” she says. “It’s about understanding where AI fits in society. Everyone, from kids to retirees, has a stake in this conversation, and we need to capture all those perspectives.”
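The programmer example above is easy to make concrete. The sketch below is a hypothetical illustration rather than anything from the article: the table, function names, and the “AI-suggested” query are invented. It shows the kind of subtle security flaw, in this case an SQL injection opening, that an assistant can cheerfully produce and that an experienced reviewer would catch while a novice might not.

```python
# Hypothetical example: an AI assistant is asked to "look up a user by name."
# A plausible first suggestion builds the SQL string by hand, which an
# experienced programmer would flag as an injection risk.
import sqlite3

def find_user_ai_suggested(conn: sqlite3.Connection, name: str):
    # Vulnerable: the name is pasted directly into the SQL text.
    cursor = conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'")
    return cursor.fetchall()

def find_user_reviewed(conn: sqlite3.Connection, name: str):
    # Safer: a parameterized query lets the database driver handle escaping.
    cursor = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cursor.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    # A malicious input like "' OR '1'='1" makes the naive query return every row.
    print(find_user_ai_suggested(conn, "' OR '1'='1"))  # leaks all users
    print(find_user_reviewed(conn, "' OR '1'='1"))      # returns nothing
```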
The AI landscape is rapidly evolving, with America’s $500 billion Stargate Project signaling massive infrastructure investment while China’s DeepSeek emerges as a formidable competitor. DeepSeek’s advanced AI models, rivaling Western capabilities at lower costs, raise significant concerns about potential cybersecurity threats, data mining, and intelligence gathering on a global scale. This development highlights the urgent need for robust AI regulation and security measures in the U.S.

As the AI race intensifies, the gap between technological advancement and governance widens. The U.S. faces the critical challenge of not only accelerating its AI capabilities through projects like Stargate but also developing comprehensive regulatory frameworks to protect its digital assets and national security interests. With DeepSeek’s potential to overcome export controls and conduct sophisticated cyber operations, the U.S. must act swiftly to ensure its AI innovations remain secure and competitive in this rapidly changing technological landscape.

We have already seen the first wave of AI-powered dangers. Deepfakes, bot accounts, and algorithmic manipulation on social media have all helped undermine social cohesion while contributing to the creation of political echo chambers. But these dangers are child’s play compared to the risks that will emerge in the next five to ten years.

During the pandemic, we saw the unparalleled speed with which new vaccines could be developed with the help of AI. As Mustafa Suleyman, a cofounder of DeepMind and now CEO of Microsoft AI, has argued, it will not be long before AI can design new bioweapons with equal speed. And these capabilities will not be confined to state actors. Just as modern drone technology has recently democratized access to capabilities that were once the sole province of the military, any individual with even a rudimentary knowledge of coding will soon be able to weaponize AI from their bedroom at home.

The fact that U.S. senators were publicly advocating the shooting down of unmanned aircraft systems, despite the lack of any legal basis for doing so, is a clear sign of a systemic failure of control. This failure is even more concerning than the drone sightings themselves. When confidence in the government’s ability to handle such unexpected events collapses, the result is fear, confusion, and conspiratorial thought. But there is much worse to come if we fail to find new ways to regulate novel technologies. If you think the systemic breakdown in response to drone sightings is worrying, imagine how things will look when AI starts causing problems.

Seven years spent helping the departments of Defense and Homeland Security with innovation and transformation (both organizational and digital) has shaped my thinking about the very real geopolitical risks that AI and digital technologies bring with them. But these dangers do not come only from outside our country. The past decade has seen an increasing tolerance among many U.S. citizens for the idea of political violence, a phenomenon that has been cast into particularly vivid relief in the wake of the shooting of UnitedHealthcare CEO Brian Thompson. As automation replaces increasing numbers of jobs, it is entirely possible that a wave of mass unemployment will lead to severe unrest, multiplying the risk that AI will be used as a weapon to lash out at society at large.

These dangers will be on our doorsteps soon. But even more concerning are the unknown unknowns.
AI is developing at lightning speed, and even those responsible for that development have no idea exactly where we will end up. Nobel laureate Geoffrey Hinton, the so-called Godfather of AI, has said there is a significant chance that artificial intelligence will wipe out humanity within just 30 years. Others suggest that the time horizon is much narrower. The simple fact that there is so much uncertainty about the direction of travel should concern us all deeply. Anyone who is not at least worried has simply not thought hard enough about the dangers.

The regimented regulation has to be risk-based

We cannot afford to treat AI regulation in the same haphazard fashion that has been applied to drone technology. We need an adaptable, far-reaching, and future-oriented approach to regulation that is designed to protect us from whatever might emerge as we push back the frontiers of machine intelligence.

During a recent interview with Senator Richard Blumenthal, I discussed the question of how we can effectively regulate a technology that we do not yet fully understand. Blumenthal is the co-author, with Senator Josh Hawley, of the Bipartisan Framework for U.S. AI Act, also known as the Blumenthal-Hawley Framework. Blumenthal proposes a relatively light-touch approach, suggesting that the way the U.S. government regulates the pharmaceutical industry can serve as a model for our approach to AI. This approach, he argues, provides for strict licensing and oversight of potentially dangerous emerging technologies without placing undue restrictions on the ability of American companies to remain world leaders in the field.

“We don’t want to stifle innovation,” Blumenthal says. “That’s why the regimented regulation has to be risk-based. If it doesn’t pose a risk, we don’t need a regulator.”

This approach offers a valuable starting point for discussion, but I believe we need to go further. While a pharmaceutical model may be sufficient for regulating corporate AI development, we also need a framework that will limit the risks posed by individuals. The manufacturing and distribution of pharmaceuticals requires significant infrastructure, but computer code is an entirely different beast, one that can be replicated endlessly and transmitted anywhere on the planet in a fraction of a second. The possibility of problematic AI being created and leaking out into the wild is simply much higher than is the case for new and dangerous drugs. Given the potential for AI to generate extinction-level outcomes, it is not too far-reaching to say that the regulatory frameworks surrounding nuclear weapons and nuclear energy are more appropriate for this technology than those that apply in the drug industry.

The announcement of the Stargate Project adds particular urgency to this discussion. While massive private-sector investment in AI infrastructure is crucial for maintaining American technological leadership, it also accelerates the timeline for developing comprehensive regulatory frameworks. We cannot afford to have our regulatory responses lag years behind technological developments when those developments are being measured in hundreds of billions of dollars.

However we choose to balance the risks and rewards of AI research, we need to act soon. As we saw with the drone sightings that took place before Christmas, the lack of a comprehensive and cohesive framework for managing the threats from new technologies can leave government agencies paralyzed. And with risks that take in anything up to and including the extinction of humanity, we cannot afford this kind of inertia and confusion.

We need a comprehensive regulatory framework that balances innovation with safety, one that recognizes both AI’s transformative potential and its existential dangers. That means:

Promoting responsible innovation. Encouraging the development and deployment of AI technologies in critical sectors in a safe and ethical manner.

Establishing robust regulations. Public trust in AI systems requires both clear and enforceable regulatory frameworks and transparent systems of accountability.

Strengthening national security. Policymakers must leverage AI to modernize military capabilities, deploying AI solutions that predict, detect, and counter cyber threats while ensuring ethical use of autonomous systems.

Investing in workforce development. As a nation, we must establish comprehensive training programs that upskill workers for AI-driven industries while enhancing STEM (science, technology, engineering, and math) education to build foundational AI expertise among students and professionals.

Leading in global AI standards. The U.S. must spearhead efforts to establish global norms for AI use by partnering with allies to define ethical standards and to safeguard intellectual property.

Addressing public concerns. Securing public trust in AI requires increasing transparency about the objectives and applications of AI initiatives while also developing strategies to mitigate job displacement and ensure equitable economic benefits.

The Stargate investment represents both the promise and the challenge of AI development. While it demonstrates America’s potential to lead the next technological revolution, it also highlights the urgent need for regulatory frameworks that can match the pace and scale of innovation. With investments of this magnitude reshaping our technological landscape, we cannot afford to get this wrong. We may not get a second chance.
AI rivalry heats up: Glean CEO Arvind Jain replies to Sam Altman’s caution to investors.