When John Eng started studying the venom of the Gila monster in the early 1990s, it wasn't immediately clear if the research would lead anywhere. But Eng, a physician and researcher then working at the Veterans Administration Medical Center, wanted to build on previous research showing that the venom of some animals could potentially control blood sugar in humans, helping treat diabetes. He discovered a compound in the venom that mimicked a human hormone and licensed it to a pharmaceutical company for drug development. After more R&D, the discovery eventually led to GLP-1 drugs including Ozempic, the blockbuster diabetes and weight-loss medicine. The drugs can have severe side effects, and they aren't perfect. But they could also save tens of thousands of lives in the U.S.

It's one of many examples of how obscure fundamental research, funded by the government, leads to pharmaceutical innovation. (In Eng's case, the research was funded by the Department of Veterans Affairs, and some of the research he built on was funded by the National Institutes of Health.) And it illustrates how the cuts that the Trump administration is trying to make to NIH funding would slow down medical progress.

"Fundamental research is kind of the pacesetter of technical progress," says Pierre Azoulay, a professor at the MIT Sloan School of Management who studies technological innovation. In one study, Azoulay found that 31% of NIH grants produce articles that are later cited by private-sector pharma patents. "We're putting one dollar in and we get many, many, many more dollars out," he says. "It's just that we're not getting it next year. We're getting it over the next five, 10, 15, 20, 25 years. Things take a long time to percolate through the economy. But we are getting those benefits."

Last Friday, the NIH announced that it was slashing funding for indirect costs in research grants, capping it at 15%, down from the 40% to 60% rates at most institutions. For a grant with $1 million in direct costs, for example, overhead support could drop from as much as $600,000 to $150,000. That money covers the infrastructure that makes research possible, from building labs, paying electric bills, and setting up IT systems to paying administrative staff. It's so fundamental to how the system works that researchers say the cuts, if they stay in place, would be catastrophic. "The horrified reaction of people in academia . . . isn't hyperbole," Azoulay says. "15% would really be like the sky falling down."

That's not to say that the current system couldn't be more efficient, he says. Some of the indirect costs stem from NIH's own policies, which require grant recipients to fill out paperwork and comply with a long list of requirements. The whole system, in place since World War II, "is in serious need of reform, maybe even radical reform," he says. "But radical reform is not what happened a few days ago. It was like, shoot first and aim later."

In theory, pharmaceutical companies could do more basic research themselves. But they obviously have different incentives than researchers at a university or other independent labs, and drug companies might be less likely to pursue something like the Gila monster research. Fundamental research "is not tied to any particular product, necessarily," says Azoulay. "It can be, in rare cases. But most often, it's undertaken for lots of reasons. Sometimes it's usefulness, but sometimes just curiosity. You don't know if or when or where it's going to be useful. So the private sector is not going to do it." And if a drug company makes a discovery that could also benefit its competitors, it might be less likely to pursue it.
Academics, on the other hand, want to share their discoveries as widely as possible. Having multiple sources of funding for R&D (some from the private sector and some from philanthropy, but mostly federal support for universities) has made the U.S. the leader in biomedical innovation. For decades, the basic system hasn't really been politically controversial. Support for fundamental research has been "a bedrock principle of U.S. government policy," Azoulay says. "I would say that up until last week I would have thought that's a bipartisan point of agreement."

After 22 states filed a lawsuit arguing that the cuts would stop clinical trials and cause immediate layoffs, a judge temporarily blocked the changes, with a hearing set for February 21. Other lawsuits are pending. Legally, the Trump administration shouldn't be able to make the cuts: Congress explicitly banned the NIH from changing how indirect costs are determined without prior approval.

Still, the Trump administration is likely to keep fighting to reduce funding. Part of the motivation, undoubtedly, is to hurt universities. "This would be really bad for the institutions that do research, which I sometimes think is exactly the point," says Azoulay. "They want to make Harvard cry." But the long-term effect would be to dramatically slow the pace of innovation in health in the U.S. That effect won't show up immediately, but it will eventually be significant. "It's like if you have a contractor come into your house and start hacking away at walls without looking at the building plans," Azoulay says. "The house doesn't fall down immediately. But you're taking a big chance and it might actually fall down later on."
Workweeks can go by in a flash. Starting a day can feel like getting on a roller coaster: strap in, and almost before you can blink, the day is over. And then it's time to start again. Because you get immersed in the chaos of the day so quickly, the momentary emotions you experience as you move from one task to another probably get lost in the shuffle.

As Barbara Fredrickson and Daniel Kahneman pointed out, though, most of our lives are really experienced through our memories of events rather than in the moments of those events themselves. Paradoxically, then, you want to think about how to create memories of a happy work life rather than maximizing the happiness you're experiencing in the moment.

To better understand why this approach works, consider an analogy: your experience of the passage of time. In the moment, time seems longest when you are bored and able to pay attention to its passage. Looking back, though, time seems longest when you did the most new things, because those experiences serve as landmarks in your memory for the time that has passed. So days that drag in the moment don't feel long when you look back on them, while days that fly by may seem long in memory.

Understand the paradox of ambition

You are energized by dissatisfaction. Pursuing a goal requires that there be something you would like to achieve, or something you would like to avoid, that you have not yet attained. After all, if you have everything you want in life, there is no reason to do anything.

One place this manifests at work is in the desire for promotion and recognition. When you aspire to another role or greater responsibility, you derive your motivational energy from being dissatisfied with your current situation. That can cause you to focus on the aspects of your current role or employer that are less than ideal. On a day-to-day basis, then, your ambition is going to make you feel less positive about work than you would if you were satisfied with your role.

That doesn't mean you can't be happy if you're ambitious. You have to find your satisfaction by looking at your trajectory rather than at your current state. Feel good about improvements in your skills and the things you have accomplished. Focus on the relationships you have developed at work and the impact your work is having. By expanding the time horizon you use to think about your work, you can be both ambitious and pleased with your progress.

Celebrate your wins

Because you probably have a lot on your to-do list, it is common to complete a task and immediately move on to the next thing. As a result, you focus on the intensity of the work you're doing but don't have an opportunity to reflect on the value of something you have finished.

Take the time to celebrate the wins you participate in. When a client signs a contract, a sale closes, or a report gets distributed to a big audience, take a little victory lap. Reflect on the impact that your work is having on your organization and the people it serves. Those few moments spent in celebration will help you remember the important influence your work has on the success of your team and your organization, and that will increase your overall satisfaction with your work.

Look for joy moments

Sometimes there aren't natural chances to celebrate a particular win. That doesn't mean you aren't contributing to important positive outcomes. You may have to seek out chances to enjoy the work you're doing and its impact.
If you have a really enjoyable and productive meeting with a team, call it out at the end. Talk about how much you enjoy the time you spend with them. If your work contributes to positive outcomes you don't see directly, find ways to acknowledge those as well. I encourage the staff I work with at the University of Texas to walk outside during the busy times of the semester to remind themselves that their work is contributing to the college experiences of so many students. While they may not see the direct influence of a specific project on students, without this collective effort those blissful college years would not be as rich.

Celebrate your colleagues

Often (particularly if you are ambitious) you may treat the accolades and successes in your organization as a zero-sum game, meaning that if someone else hits a home run or gets acknowledged for their contribution, your own standing is diminished. But that's simply not true. You and your colleagues are all on the same team. If a colleague gets a promotion, lands a big sale, or solves a huge problem, celebrate their efforts. Take pride in being on a team with other talented people.

This shared joy in the successes of others creates a sense of camaraderie that brings satisfaction to your work. It also lays the groundwork for other people to share genuinely in your successes. After all, the world could always use a little more celebration. Embrace every opportunity to share the joy of your community.
Even tech giant Apple couldn't prevent its artificial intelligence from making things up. Last month, the company suspended its AI-powered news alert feature after it falsely claimed a murder suspect had shot himself, one of several fabricated headlines that appeared under trusted news organizations' logos. The embarrassing pullback came despite Apple's vast resources and technical expertise.

Most users probably weren't fooled by the more obvious errors, but the incident highlights a growing challenge. Companies are racing to integrate AI into everything from medical advice to legal documents to financial services, often prioritizing speed over safety. Many of these applications push the technology beyond its current capabilities, creating risks that aren't always obvious to users.

"The models are not failing," says Maria De-Arteaga, an assistant professor at the University of Texas at Austin's McCombs School of Business. "We're deploying the models for things that they're not fit for purpose."

As the technology becomes more embedded in daily life, researchers and educators face two distinct hurdles: teaching people to use these tools responsibly rather than over-relying on them, while also convincing AI skeptics to learn enough about the technology to be informed citizens, even if they choose not to use it. The goal isn't simply to "fix" the AI, but to learn its shortcomings and develop the skills to use it wisely.

It's reminiscent of how early internet users had to learn to navigate online information, eventually understanding that while Wikipedia might be a good starting point for research, it shouldn't be cited as a primary source. Just as digital literacy became essential for participating in modern democracy, AI literacy is becoming fundamental to understanding and shaping our future.

At the heart of these AI mishaps are the hallucinations and distortions that lead AI models to generate false information with seeming confidence. The problem is pervasive: In one 2024 study, chatbots got basic academic citations wrong between 30% and 90% of the time, mangling paper titles, author names, and publication dates.

While tech companies promise these hallucinations can be tamed through better engineering, De-Arteaga says researchers are finding that they may be fundamental to how the technology works. She points to a paper from OpenAI (the same company that partnered with Apple for news summarization) which concluded that well-calibrated language models must hallucinate as part of their creative process; if they were constrained to produce only factual information, they would cease to function effectively. "From a mathematical and technical standpoint, this is what the models are designed to do," De-Arteaga says.

Teaching literacy

As researchers acknowledge that AI hallucinations are inevitable, and that humans naturally tend to put too much trust in machines, educators and employers are stepping in to teach people how to use these tools responsibly. California recently passed a law requiring AI literacy to be incorporated into K-12 curricula starting this fall. And the European Union's AI Act, which went into effect on February 5, requires organizations that use AI in their products to implement AI literacy programs.

"AI literacy is incredibly important right now, especially as we're trying to figure out what are the policies, what are the boundaries, what do we want to accept as the new normal," says Victor Lee, an associate professor in the Graduate School of Education at Stanford University.
"Right now, people who know more speak really confidently and are able to direct things, and there needs to be more societal consensus."

Lee sees parallels to how society adapted to previous technologies. "Think about calculators. To this day, there are still divides about when to use a calculator in K-12, how much you should know versus how much the calculator should be the source of things," he says. "With AI, we're having that same conversation, often with writing as the example."

Under California's new law, AI literacy education must include understanding how AI systems are developed and trained, their potential impacts on privacy and security, and the social and ethical implications of AI use. The EU goes further, requiring companies that produce AI products to train applicable staff to have the "skills, knowledge and understanding that allow providers, deployers and affected persons . . . to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause." Both frameworks emphasize that AI literacy isn't just technical knowledge; it's about developing the critical thinking skills to evaluate AI's appropriate use in different contexts.

Amid a marketing onslaught from Big Tech companies, the challenge facing educators is complex. Recent research published in the Journal of Marketing shows that people with less understanding of AI are actually more likely to embrace the technology, viewing it as almost magical. The researchers say this lower literacy-higher receptivity link suggests that companies may benefit from shifting their marketing efforts and product development toward consumers with lower AI literacy.

The goal isn't to dampen openness to new technology, educators say, but to combine it with critical thinking skills that help people understand both AI's potential and its limitations. That's especially important for people who tend to lack access to the technology, or who are simply skeptical or fearful about AI.

For Lee, successful AI literacy requires seeing through the magic. "The anxiety and uncertainty feeds a lot of the skepticism and doubt or non-willingness to even try AI," he says. "Seeing that AI is actually a bunch of different things, and not a sentient, talking computer, and that it's not even really talking, but just spitting out patterns that are appropriate, is part of what AI literacy would help to instill."

At the City University of New York, Luke Waltzer, director of the Teaching and Learning Center at the school's Graduate Center, is leading a project to help faculty develop approaches for teaching AI literacy within their disciplines. "Nothing about their adoption or their integration into our ways of thinking is inevitable," Waltzer says. "Students need to understand that these tools have a material basis: they're made by men and women, they have labor implications, they have an ecological impact."

The project, backed by a $1 million grant from Google, will work with 75 professors over three years to develop teaching methods that examine AI's implications across different fields. Materials and tools developed through the project will be distributed publicly so other educators can benefit from CUNY's work. "We've seen the hype cycles around massive open online courses that were going to transform education," Waltzer says. "Generative AI is distinct from some of those trends, but there's definitely a lot of hype. Three years lets things settle.
We will be able to see the future more clearly."

Such initiatives are spreading rapidly across higher education. The University of Florida aims to integrate AI into every undergraduate major and graduate program. Barnard College has created a "pyramid" approach that gradually builds students' AI literacy from basic understanding to advanced applications. At Colby College, a private liberal arts college in Maine, students are building their literacy with a custom portal that lets them test and compare different chatbots. Around 100 universities and community colleges have launched AI credentials, according to research from the Center for Security and Emerging Technology, with degree conferrals in AI-related fields increasing 120% since 2011.

Beyond the classroom

For most people, learning to navigate AI means sorting through corporate marketing claims with little guidance. Unlike students who will soon have formal AI education, adults must figure out on their own when to trust these increasingly prevalent tools, and when they're being oversold by companies eager to recoup massive AI investments. This self-directed learning is happening quickly: LinkedIn found that workers are adding AI literacy skills such as prompt engineering and proficiency with tools like ChatGPT at nearly five times the rate of other professional skills.

As universities and lawmakers try to keep up, tech companies are offering their own classes and certifications. Nvidia recently announced a partnership with California to train 100,000 students, educators, and workers in AI, while companies like Google and Amazon Web Services offer their own AI certification programs. Intel aims to train 30 million people in AI skills by 2030. In addition to free online AI courses offered by institutions like Harvard University and the University of Pennsylvania, people can also learn AI basics from companies like IBM, Microsoft, and Google.

"AI literacy is like digital literacy: it's a thing," De-Arteaga says. "But who should teach it? Meta and Google would love to be teaching you their view of AI."

Instead of relying on companies with a vested interest in selling you on AI's utility, Hare suggests starting with AI tools in areas where you have expertise, so you can recognize both their utility and their limitations. A programmer might use AI to help write code more efficiently while being able to spot the bugs and security issues that a novice would miss. The key is combining hands-on experience with guidance from trusted third parties who can provide unbiased information about AI's capabilities, particularly in high-stakes areas like healthcare, finance, and defense.

"AI literacy isn't just about how a model works or how to create a dataset," she says. "It's about understanding where AI fits in society. Everyone, from kids to retirees, has a stake in this conversation, and we need to capture all those perspectives."