Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

Why OpenAI's new open-weight models matter

OpenAI is opening up again. The company's release of two open-weight models, gpt-oss-120b and gpt-oss-20b, this month marks a major shift from its 2019 pivot away from transparency, when it began keeping its most advanced research under wraps after a breakthrough in model scaling and compute. Now, with GPT-5 on the horizon, OpenAI is signaling a return, at least in part, to its original ethos.

These new models come with all their internal weights exposed, meaning developers can inspect and fine-tune how they work. That doesn't make them open-source in the strictest sense (the training data and source code remain closed), but it does make them more accessible and adaptable than anything OpenAI has offered in years.

The move matters not just because of the models themselves, but because of who's behind them. OpenAI is still the dominant force in generative AI, with ChatGPT as its flagship consumer product. When a leader of that stature starts releasing open models, it sends a signal across the industry. "Open models are here to stay," says Anyscale cofounder Robert Nishihara. "Now that OpenAI is competing on this front, the landscape will get much more competitive and we can expect to see better open models."

Enterprises, especially ones in regulated industries like healthcare or finance, like to build on open-source models so that they can tailor them to their needs, and so they can run the models on in-house servers or in private clouds rather than taking on the high cost and security risks of sending their (possibly sensitive or proprietary) data out to a third-party LLM such as OpenAI's GPT-4.5, Anthropic's Claude, or Google's Gemini.
OpenAI's oss models are licensed under Apache 2.0, meaning developers can use, modify, and even commercialize them, as long as they credit OpenAI and waive any patent claims. None of that would matter if the models weren't state of the art, but they are. The larger gpt-oss-120b (120 billion parameters) model matches OpenAI's o4-mini on core reasoning benchmarks while running on a single graphics processing unit (GPU), OpenAI says. The smaller gpt-oss-20b model performs on par with the company's o3-mini, and is compact enough to run on edge devices with just 16 GB of memory (like a high-end laptop).

That small size matters a lot. Many in the industry believe that small models running on personal devices could be the wave of the future. On-device models, after all, don't have to connect to the cloud to process data, so they are more secure and can keep data private more easily. Small models are also often trained to do a relatively narrow task (like quality inspection in a factory or language translation on a phone).

The release could also accelerate the broader ecosystem of open AI infrastructure. "The more popular open models become, the more important open-source infrastructure for deploying those models becomes," Nishihara says. "We're seeing the rise of open models complemented by the emergence of high-quality open-source infrastructure for training and serving those models; this includes projects like Ray and vLLM."

There's also a geopolitical subtext. The Trump administration has increasingly framed AI as a strategic asset in its rivalry with China, pushing American companies to shape global norms and infrastructure. Open-weight models from a top U.S. lab, built to run on Nvidia chips, could spread quickly across regions like Africa and the Middle East, countering the rise of free Chinese models tuned for Huawei hardware. It's a soft-power play, not unlike the U.S. dollar's dominance as a global currency.
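That 16 GB figure is easy to sanity-check with back-of-envelope arithmetic: the memory a model's weights consume is roughly its parameter count times the bytes used to store each parameter. A minimal sketch, with the caveat that the precision labels and the round 20-billion-parameter figure are illustrative assumptions, and that activation memory and runtime overhead are ignored:

```python
def weights_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate size of a model's weights alone, in gigabytes."""
    return num_params * bytes_per_param / 1e9

# A hypothetical 20-billion-parameter model at a few common precisions.
params = 20e9
for label, bpp in [("16-bit", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
    print(f"{label}: ~{weights_gb(params, bpp):.0f} GB of weights")
```

At 16-bit precision the weights alone would need roughly 40 GB, which is why aggressive quantization of the weights is what makes running a model of this size on a 16 GB laptop plausible at all.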
Google's new Genie 3 world models could enable wild new forms of gaming and entertainment

With the right prompt, AI models can generate words, voices, music, images, video, and other things. And the quality of those generations continues to grow. Google DeepMind has pushed the boundaries even further with its world models, capable of generating live, interactive environments that users can navigate and modify in real time.

Words alone don't fully capture the capabilities of DeepMind's new Genie 3 model. A demo video shows a number of lifelike worlds (a desert, a scuba diving scene, a living room, a canal city, etc.). At one point, the user adds a whimsical element to the canal city world by writing the prompt: "A man in a chicken suit emerges from the left of the shot and runs down the towpath hugging the wall." And the man in the chicken suit immediately appears in the world. Then, the user drops a dinosaur into the nearby canal. Splash.

The most obvious application of this kind of AI is in gaming, where a model could generate an endless stream of environments and game scenarios for the gamer. It's a natural research focus for DeepMind, which focused its early AI research on video game environments.

The potential for world modeling is enormous. Future versions of the Genie model could enable "choose your adventure" experiences in video or AR formats, where storytelling adapts dynamically to the viewer's preferences, interests, and impulses. As Google notes, companies working on self-driving cars or robotics could also benefit, using these models to simulate real-world conditions that would be costly or impractical to recreate physically.

The AI industry responds to AI tool abuse by students

As the new school year approaches, educators and parents continue to worry that students are using AI tools to do their schoolwork for them.
The danger is that students can rely heavily on AI to generate answers to questions, while failing to learn all the contextual knowledge they would pick up during the process of finding answers on their own. A growing body of research suggests that relying on AI harms overall academic performance.

Now OpenAI and Google have each responded to this worrisome situation by releasing special study modes inside their respective AI chatbots. OpenAI's tool is called ChatGPT study mode, while Google offers a similar feature within its Gemini chatbot called Guided Learning. The tools' format and features seem remarkably similar. Both break down complex problems into smaller chunks and then walk the student through them using a question-and-answer approach. Google says its questions are designed to teach students the "how" and "why" behind a topic, encouraging learning throughout the exchange. OpenAI says its tool uses Socratic questioning, hints, and self-reflection prompts to guide understanding and promote active learning. Both OpenAI and Google say that the teaching approach and format are based on research by learning experts.

Still, the student is ultimately in control of what AI tools they use. OpenAI says that users can easily toggle between regular chatbot mode and the study mode. Google says it believes students need AI for both traditional question searches and for guided study. So these new learning tools may provide an alternative mode of learning using AI, but they're not likely to significantly shift the argument around AI's threat to real learning.

More AI coverage from Fast Company:

Google wants you to be a citizen data scientist
Reviving this government agency could be the key to U.S. tech dominance
Cloudflare vs. Perplexity: a web scraping war with big implications for AI
What the White House Action Plan on AI gets right and wrong about bias

Want exclusive reporting and trend analysis on technology, business innovation, the future of work, and design?
Sign up for Fast Company Premium.
U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap. Several states have already enacted legislation around the use of AI, and all 50 states have introduced AI-related legislation in 2025. Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition, and generative AI.

Government use of AI

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI, AI that performs statistical analysis to make forecasts, has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole. But the widespread use of algorithmic decision-making could have major hidden costs. Potential algorithmic harms posed by AI systems used for government services include racial and gender biases.

Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with emphasis on transparency, consumer protections, and recognizing risks of AI deployment. Several states have required AI developers to disclose risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them. Montana's new Right to Compute law requires AI developers to adopt risk management frameworks, methods for addressing security and privacy in the development process, for AI systems involved in critical infrastructure. Some states have established bodies that provide oversight and regulatory authority, such as those specified in New York's SB 8755 bill.
AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers' use of AI, and clinicians' use of AI. Bills about transparency define requirements for the information that AI system developers, and the organizations that deploy the systems, must disclose. Consumer protection bills aim to keep AI systems from unfairly discriminating against some people, and to ensure that users of the systems have a way to contest decisions made using the technology. Bills covering insurers provide oversight of the payers' use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate use of the technology in diagnosing and treating patients.

[Photo caption: Numerous bills in state legislatures aim to regulate the use of AI in health care, including medical devices like this electrocardiogram recorder. VCG via Getty Images]

Facial recognition and surveillance

In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is to protect individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases. Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties. A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities: the software was less likely to correctly identify darker faces. Bias also creeps into the data used to train these algorithms, for example when the composition of the teams that guide the development of such facial recognition software lacks diversity.
By the end of 2024, 15 states in the U.S. had enacted laws to limit the potential harms from facial recognition. Some elements of state-level regulations are requirements on vendors to publish bias test reports and data management practices, as well as the need for human review in the use of these technologies.

[Photo caption: Porcha Woodruff was wrongly arrested for a carjacking in 2023 based on facial recognition technology. AP Photo/Carlos Osorio]

Generative AI and foundation models

The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah's Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose when they're using generative AI systems to interact with someone when that person asks if AI is being used, though the legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information.

Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. A foundation model is any AI model that is trained on an extremely large dataset and that can be adapted to a wide range of tasks without additional training. AI developers have typically not been forthcoming about the training data they use. Such legislation could help copyright owners of content used in training AI overcome the lack of transparency.

Trying to fill the gap

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers' compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights, and consumer protections. Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025.
The plan says, "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations . . ." The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration's definition of "burdensome" against needed federal funding for AI.

Anjana Susarla is a professor of information systems at Michigan State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
First things first: Whenever possible, science says don't have so many meetings. Here's why:

A meta-analysis of more than a decade of research shows employee productivity increases by more than 70% when the number of meetings is reduced by 40%.

A study published in the Journal of Organizational Behavior found that meetings that start late don't just waste time: Meetings that start 10 minutes late are one-third less effective, in terms of both actual and perceived outcomes, than meetings that start on time.

A study published in Philosophical Transactions of the Royal Society of London found that people placed in small groups and asked to solve problems experience an individual IQ drop of approximately 15%. Walk into a meeting, instantly get dumber.

So yeah: Stop having so many meetings. (Besides: A full calendar, especially a calendar full of meetings, is never a proxy for productivity.)

But what if you really need to have a meeting? How can you make that meeting as focused and productive as possible? Borrow a move from Oprah Winfrey's leadership tool kit.

Start with intention

Brendon Burchard, the author of High Performance Habits: How Extraordinary People Become That Way, says Oprah starts every meeting by asking three questions: What is our intention for this meeting? What's important? What matters?

The premise behind that approach is simple. High performers constantly seek clarity. (And employees who aren't high performers yet need clarity.) They work hard to sift out distractions so they can focus, and continually refocus, on what is important.

Clarity? It isn't something you get. Clarity is something you have to seek: You gain clarity, and focus, only when you actively search for them. Keep in mind the same holds true on a personal level. Successful people don't wait for an external trigger to start making changes. Successful people don't wait until New Year's, or until Monday, or until the first of the month; they decide what changes they want to make and they get started. Now.
That's why no meeting agenda should include words like recap, information, review, or discussion. Bringing everyone up to speed, whether formally stated as an intention or not, is a terrible reason to have a meeting. And if information is required to make a decision during a meeting, share it ahead of time. Send documents, reports, etc., to participants in advance. Good meetings result in decisions. What. Who. When. Clear direction. Clear actions. Clear accountability.

And stick to that intention

That's why the most productive meetings typically have one-sentence agendas: "Set product launch date." "Select supplier." "Determine roll-out responsibilities." Those agendas are much easier to accomplish when you start a meeting the right way: by clearly stating intentions, and then sticking to those intentions.

Try it. The next time you hold a meeting, kick it off, on time, by answering the three questions for the group. State the intention. Explain why it's important. Explain why it matters. If you find yourself in a meeting that's drifting, help everyone focus by asking the three questions. Ask what you're really trying to accomplish. Determine why it's important, and why it matters. While it might feel awkward, everyone in the meeting will thank you for it. Because no one likes an unproductive meeting. And neither should you.

By Jeff Haden

This article originally appeared in Fast Company's sister publication, Inc. Inc. is the voice of the American entrepreneur. We inspire, inform, and document the most fascinating people in business: the risk-takers, the innovators, and the ultra-driven go-getters who represent the most dynamic force in the American economy.
The Trump administration has been talking to drugmakers about ways to raise prices of medicines in Europe and elsewhere in order to cut drug costs in the United States, according to a White House official and three pharmaceutical industry sources.

U.S. officials told drug companies the administration would support their international negotiations with governments if they adopt "most favored nation" pricing, under which U.S. drug costs match the lower rates offered to other wealthy countries, the White House official said. The U.S. is currently negotiating bilateral trade deals and setting tariff rates on the sector.

The Trump administration has asked some companies for ideas on raising prices abroad, two of the sources said, describing multiple meetings over several months aimed at lowering U.S. prices without triggering the cuts to research and development spending that drugmakers insist would result. The White House official called the effort collaborative, saying both sides were seeking advice from each other.

The U.S. pays more for prescription drugs than any other country, often nearly three times as much as other developed nations. President Donald Trump has repeatedly said he wants to narrow this gap to stop Americans from being "ripped off." The previously unreported discussions reflect the challenges Trump faces to achieve that goal, and are the backdrop to the letters he sent last week to CEOs of 17 major drugmakers, urging them to cut U.S. prices to match those paid overseas.

Unlike in the U.S., where market forces determine drug prices, European governments typically negotiate directly with companies to set prices for their national healthcare systems. Anna Kaltenboeck, a health economist at Verdant Research, said European nations have leverage to drive pricing and are sometimes willing to walk away from purchasing medicines they deem too expensive. Drugmakers generate most of their sales in the U.S.
The Pharmaceutical Research and Manufacturers of America, the industry's main lobby group, has always argued that cutting U.S. prices would stifle innovation by lowering R&D spending. PhRMA declined to comment on the private meetings. Kaltenboeck said past studies had shown that drugmakers made enough money in the U.S. to more than fund their entire global R&D spending. "Prices can come down in the United States without being increased in other countries, and we can still get innovation," she said.

TOP PRIORITY

Despite the Trump administration's tariff threats and pressure to move more manufacturing to the U.S., the push to raise European drug prices is its top priority in discussions with industry, according to a senior executive at a European drugmaker, who spoke on condition of anonymity about the confidential meetings. "This is the key conversation right now with PhRMA and every company getting that message from Pennsylvania Avenue to a point that we are already executing on it," the executive said, referring to the White House address. The company had already met with European governments on the issue, the executive added.

An E.U. Commission spokesperson said it is in regular contact with the pharma industry and pointed to an agreement with the U.S. that should it impose tariffs on pharmaceuticals, they would be capped at 15%.

When asked how the administration would support international drug price negotiations, the White House official referred Reuters to Trump's "most favored nation" executive order from May. That order directed trade officials to pursue trade and legal action against countries keeping drug prices below fair market value. In last week's letters, Trump complained that since the May executive order, most industry proposals had simply shifted blame for high prices or requested policy changes that would result in billions in industry handouts.
A second source, a pharmaceutical executive who was not authorized to speak on the matter, said the Trump administration has been continually meeting with representatives of his company and had discussed strategies for raising drug prices internationally. "There's a big push from the administration to drive up prices outside the U.S.," the executive said.

The executive said the Trump administration had been looking at using trade talks with the UK and EU as leverage, and considered pressuring countries to spend a higher percentage of GDP on new medicines or offering tariff breaks in exchange for higher drug spending. It was understood that the UK deal specifically aims to get the country to ramp up investment in branded medicines over time, the executive said. A spokesperson for the UK government said it would continue to work closely with the U.S. and its own pharmaceutical industry to understand the possible impact of any changes to drug pricing, without commenting on the trade talks.

In April, over 30 industry CEOs, including those from AstraZeneca, Bayer and Novo Nordisk, signed a letter to European Commission President Ursula von der Leyen saying Europe needed to rethink its pricing policies. "It's going to be very difficult for a country that already has the ability to control what it spends to go in the other direction," Kaltenboeck said, "and it doesn't make much sense for them politically."

Patrick Wingrove and Maggie Fick, Reuters
Japanese technology conglomerate SoftBank Group Corp. posted a 421.8 billion yen ($2.9 billion) profit in the April-June quarter, rebounding from a loss a year earlier as its investments benefited from the craze for artificial intelligence.

Quarterly sales at Tokyo-based SoftBank Group, which invests heavily in AI companies like Nvidia and OpenAI, rose 7% to 1.8 trillion yen ($12 billion), the company said Thursday. SoftBank's loss in April-June 2024 was 174 billion yen. The company's fortunes tend to fluctuate because it invests in a range of ventures through its Vision Funds, a move that carries risks. The group's founder, Masayoshi Son, has emphasized that he sees a vibrant future in AI.

SoftBank has also invested in Arm Holdings and Taiwan Semiconductor Manufacturing Co. Both companies, which produce computer chips, have benefited from the growth of AI. "The era is definitely AI, and we are focused on AI," SoftBank senior executive Yoshimitsu Goto told reporters. "An investment company goes through its ups and downs, but we are recently seeing steady growth."

Some of SoftBank's other investments also have paid off big. An example is Coupang, an e-commerce company known as the Amazon of South Korea because it started out in Seoul. Coupang now operates in the U.S. and other Asian nations. Goto said preparations for an IPO for PayPay, a kind of cashless payment system, were going well. The company has already held IPOs for Chime, a U.S. neobank that provides banking services for low-credit consumers, and for eToro, a personal investment platform.

SoftBank Group stock, which has risen from a year ago, finished 1.3% higher on the Tokyo Stock Exchange after its earnings results were announced.

Yuri Kageyama, AP business writer