Just days after settling with the Department of Justice (DOJ), ticketing company Live Nation is again under fire after internal messages revealed employees bragging about taking advantage of ticket buyers.

In message exchanges from 2022, two regional directors of ticketing for Live Nation amphitheaters, Ben Baker and Jeff Weinhold, boasted about the prices they were able to get away with charging customers for ancillary items such as parking, lawn chair rentals, and VIP access. "I gouge them on ancil prices," Baker wrote. In one exchange, Weinhold shared how he was able to charge $250 for VIP parking at a venue. "These people are so stupid," Baker replied. "I almost feel bad taking advantage of them." In another series of messages, Baker says he charges customers $50 to park in the grass and $60 for closer grass. "Robbing them blind baby," he added. "That's how we do it."

The DOJ's antitrust trial against Live Nation and Ticketmaster began this month, with the government alleging that Live Nation's control of Ticketmaster was monopolizing the ticketing industry and leading to unfair pricing for consumers. Last week, Live Nation filed a request for the judge to exclude six sets of Baker and Weinhold's messages from the trial, arguing that they would unfairly bias the jury. The DOJ and the attorneys general of the states suing Live Nation opposed the request, and several media organizations later petitioned for the documents to be unsealed.

On Monday, the DOJ and Live Nation reached a surprise settlement that lets the company retain ownership of Ticketmaster. But despite the legal win, the Baker and Weinhold messages have dealt another blow to the brand's reputation.

In a statement to Fast Company, Live Nation condemned Weinhold and Baker's conduct, adding that its own executives were unaware of the exchanges before the trial documents were unsealed. "The Slack exchange from one junior staffer to a friend absolutely doesn't reflect our values or how we operate," the statement reads. "Because this was a private Slack message, leadership learned of this when the public did, and will be looking into the matter promptly."

A spokesperson for Live Nation emphasized that Baker and Weinhold's behavior was against company policy and that their pricing exceeded limits put in place to protect ticket buyers. "We are digging into it now that we are aware," the spokesperson added. "This is not at all an acceptable way to behave or talk, and important to note that these are not executives."
At a recent AI summit in New Delhi, Sam Altman warned that early versions of superintelligence could arrive by 2028, that AI could be weaponized to create novel pathogens, and that democratic societies need to act before they are overtaken by the technology they have built.

These concerns are widely shared across the industry. Geoffrey Hinton, the Nobel laureate known as the "godfather of AI," has warned that creating digital beings more intelligent than ourselves poses a genuine existential threat. Mustafa Suleyman, CEO of Microsoft AI, devoted much of his book The Coming Wave to the argument that AI's fusion with synthetic biology could put the tools to engineer a deadly pandemic within reach of a single individual. These are not warnings about a distant future. Last week, a clash over who controls AI, and on what terms, led to a complete collapse in Anthropic's relationship with the Pentagon.

When politicians and business leaders try to make sense of issues like these, they are often tempted to look to the pharmaceutical industry for a regulatory model. Senator Richard Blumenthal, one of the few legislators actively pushing for meaningful AI regulation, has proposed that the way the U.S. government regulates the pharmaceutical industry can serve as a model for AI oversight. The analogy makes intuitive sense: the pharma model shows that strict licensing and oversight of potentially dangerous emerging technologies can limit threats without placing undue restrictions on innovation.

The instinctive attraction of this approach isn't confined to legislators. Many companies are applying the same logic internally, whether consciously or not, managing AI risk through stage-gate reviews, pre-deployment testing, and post-launch monitoring. The pharma model, in other words, is already the de facto governance framework for much of the industry.

The problem is that it's the wrong framework, and the differences are not just technical but existential.

Three disanalogies that matter

Pharmaceutical regulation works because the barriers to entry are high, the product is physical and controllable, and the development cycle is slow enough for oversight to keep pace. None of these conditions hold for AI.

First, the barriers to entry are very different. Bringing a new drug to market costs an average of $1.1 billion, according to a 2020 study published in the Journal of the American Medical Association. The infrastructure alone (laboratories, clinical trial networks, manufacturing facilities) limits the field to a relatively small number of identifiable companies that regulators can monitor.
AI has no equivalent friction. Capable models can be built for a fraction of that cost, fine-tuned on consumer hardware, and deployed globally from a laptop. The universe of actors a regulator would need to track is not a handful of identifiable companies; it is potentially anyone, anywhere.

Second, a pharmaceutical product is physical. Manufacturing it requires raw materials, specialized equipment, and distribution logistics, all of which creates friction that regulators can exploit by imposing oversight checkpoints. Code has no such friction. Once released, an AI model's weights can be copied number for number and shared across borders far more quickly than any physical weapon or industrial system; the marginal cost of replication is effectively zero. And you cannot recall software the way you recall a contaminated drug. Once it is in the wild, it stays in the wild.

Even capabilities delivered purely through cloud access are vulnerable to replication, and thus to the breaking of corporate or regulatory guardrails. In just the last month, Anthropic disclosed that three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) had used 24,000 accounts to generate over 16 million exchanges with Claude, extracting its most advanced capabilities through a technique called distillation. The Chinese labs did not need to infiltrate a supply chain or build expensive factories. They only needed API access and carefully crafted prompts, routed through proxy networks designed to evade detection. There is no pharmaceutical equivalent of this replicability.

The final crucial disanalogy is speed. The pharma approval pipeline assumes that a product will go through years of controlled testing before it reaches the public. But AI models evolve on software timelines. Capabilities improve not only through hardware gains but through software updates, new training methods, and frequent model releases that can produce meaningful jumps in weeks rather than years. Anthropic, for instance, shipped two major Claude releases within ten weeks. The iteration cycle is so fast that by the time any pharma-style approval process could hope to evaluate a model, that model would already be obsolete, replaced by something far more powerful for which the evaluation process had not even begun.

Why "test, deploy, monitor" doesn't work

The problem isn't confined to government. The same pharma-shaped thinking that distorts regulatory frameworks has taken root inside organizations, and it leaves them exposed for the same reasons. Pharma-type risks are familiar: a product might have harmful side effects, so you test it before deployment, monitor it afterward, and pull it back if something goes wrong. Even without an external regulator, many companies are applying this logic to AI internally, managing risk via the familiar means of stage-gate reviews, pre-deployment testing, and post-launch monitoring. It feels responsible. It feels sufficient.

This is precisely the danger. Of course, stage-gate reviews and pre-deployment testing are not worthless. They catch real errors, enforce discipline, and create a paper trail that demonstrates due diligence to boards and regulators. Any organization that has implemented them is better off than one that has done nothing. But these frameworks create a false sense of coverage. The risk they manage is the risk they were designed for: product defects, adverse effects, quality-control failures. AI's risk profile has a different shape entirely.
It is defined by the potential for irreversibility, rapid proliferation, and misuse. Not every AI-driven outcome will trigger these risks. But unlike a defective product, you cannot issue a recall once the damage is done. This combination of potential threats means that the familiar toolkit of managed risk simply doesn't fit, and organizations that believe it does are accepting exposures they haven't mapped.

It is precisely to meet these challenges that we developed the OPEN and CARE frameworks for managing AI innovation and risk. The CARE framework, in particular, provides a structured methodology for governing AI risk and is the foundation for the recommendations that follow.

Build governance for AI risk

The CARE framework works through four stages: Catastrophize (identify what could go wrong), Assess (prioritize those risks), Regulate (implement controls), and Exit (plan for when those controls fail). Applied to your organization's AI exposure, the framework points toward five immediate actions.

1. Surface your shadow AI exposure. Ask your direct reports one question: What AI tools are you using that weren't provided by the company? The answers will tell you how large the gap is between the AI your organization officially uses and the AI your people are actually relying on.

2. Map your irreversibility points, and your fallbacks. Identify the AI-dependent processes where a failure would be irreversible or highly damaging: automated customer communications, AI-assisted code pushed to production, algorithmic hiring screens. Ask whether your current safeguards assume you can catch and correct errors before they reach the outside world. If they do, redesign them, and build explicit fallback procedures for when they fail anyway.

3. Lock down your data exposure. Every AI tool your organization touches is a data pipeline running in both directions. Classify your data into tiers (public, internal, confidential, restricted) and map which AI tools are authorized for each tier; a minimal sketch of that mapping follows this list. Audit your vendor agreements for training-data clauses. The moment proprietary data enters a third-party system, your ability to recall it is gone.

4. Red-team for misuse, not just malfunction. Red-teaming for malfunction asks, "What if this breaks?" Red-teaming for misuse asks, "What if this works exactly as intended and someone uses it for the wrong purpose?" As the CARE framework's Catastrophize phase emphasizes, you need both.

5. Assign clear executive ownership. None of the above matters if accountability is diffused across committees. Designate a single executive who owns AI risk the way your CFO owns financial risk. That person needs authority, budget, and a direct line to the board.
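To make the tier mapping in point 3 concrete, here is a minimal sketch in Python. The tier names come from the list above; the tool names, the specific policy, and the fail-closed default are illustrative assumptions, not a description of any real organization's setup.

    from enum import IntEnum

    class Tier(IntEnum):
        PUBLIC = 0
        INTERNAL = 1
        CONFIDENTIAL = 2
        RESTRICTED = 3

    # Highest data tier each AI tool is authorized to receive.
    # Tool names are hypothetical placeholders.
    TOOL_CEILING = {
        "consumer-chatbot": Tier.PUBLIC,            # vendor may train on inputs
        "enterprise-assistant": Tier.CONFIDENTIAL,  # contract bars training on inputs
        "on-prem-model": Tier.RESTRICTED,           # never leaves company infrastructure
    }

    def is_authorized(tool: str, data_tier: Tier) -> bool:
        """Return True if the tool may process data at the given tier."""
        ceiling = TOOL_CEILING.get(tool)
        if ceiling is None:  # unknown tool: fail closed
            return False
        return data_tier <= ceiling

    # Example checks:
    assert is_authorized("enterprise-assistant", Tier.INTERNAL)
    assert not is_authorized("consumer-chatbot", Tier.RESTRICTED)
    assert not is_authorized("shadow-ai-tool", Tier.PUBLIC)  # unlisted tools are denied

The fail-closed default is the point: it turns action 1 (surfacing shadow AI) into an enforcement mechanism, because any tool that hasn't been surfaced and classified is denied automatically.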
The real stakes

For decades, pharma-style regulation has been one of the great regulatory success stories: a framework that protects the public without strangling the industry. But the model is insufficient for AI. At the governmental level, serious people are reaching for serious solutions. Sam Altman's call at the New Delhi summit for an international regulatory body modeled on the International Atomic Energy Agency reflects a clearer-eyed view of what kind of technology this is: one that demands oversight frameworks commensurate with its actual risk profile, not models borrowed from industries that don't share its characteristics.

Business leaders should follow the same path. The category of problem that governments are grappling with at the international level is the same category of problem you are grappling with inside your organization. Design your governance accordingly: for the technology you actually have, not the one you wish you were dealing with.
Welcome to AI Decoded, Fast Company's weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week via email here.

AI pioneer pulls in a cool billion to launch his world model AI company

Yann LeCun, one of the pioneers of AI and Meta's former chief AI scientist, has long argued that large language models alone will not produce AI systems that outperform humans at most tasks. LeCun says today's transformer-based large language models are useful enough to be applied in valuable ways, but he also believes they are unlikely to achieve the general or human-level intelligence needed to perform many high-value tasks now reserved for human brains. He has found no shortage of AI commentators on X who disagree with him. Now he and his investors are placing a big bet that he's right.

LeCun's new company, Advanced Machine Intelligence (AMI), says it's building a new breed of AI systems that understand the world, have persistent memory, can reason and plan, and are controllable and safe. The company said Wednesday that it raised a $1.03 billion funding round from a group of investors including Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions. Former Google CEO Eric Schmidt and Tim Berners-Lee, who invented the World Wide Web, also threw in.

AMI is likely to build models, or systems of models, that can train on a wider variety of data than today's LLMs. LeCun believes that AI systems need more than an understanding of words to truly understand and navigate the real world. They need to model the world in a very different way, one that starts with an ability to represent spatial data and develop a native understanding of physics. The AI would also need a very different architecture to structure all that high-bandwidth data. LeCun is in good company in this view: World Labs CEO Fei-Fei Li and UC Berkeley robotics lab director Pieter Abbeel are among those researching and building world models. Even during his tenure at Meta, LeCun was working on (and writing papers about) these concepts. Now he'll need to attract enough top research talent to flesh out those theories and build the models. Since LeCun is something like royalty in AI circles, I suspect he'll attract the people he needs to take a good shot at building functioning world models.

A week after launch, OpenAI's GPT-5.4 is getting good reviews

Generative models continue to improve, and the cadence of those improvements appears to be accelerating. Most recently, OpenAI released its newest model, GPT-5.4, which it says combines advances in reasoning, coding, and agentic workflows. Now that ChatGPT users and software developers have had a chance to try the model, some themes are emerging about its strengths and weaknesses relative to other frontier systems. My impression, based on comments from users, developers, and researchers on X, is that the reception has been mixed. Many say the model is more project-oriented, meaning it is better able to understand and orchestrate general information work tasks, including those involving autonomous agents. On the other hand, some critics say GPT-5.4 is not a big enough leap forward in intelligence. Others argue the model is less adept at creative tasks, such as user interface design, than earlier GPT models.
But most people would agree that GPT-5.4 is a big enough improvement to keep OpenAI at least on pace with its rival Anthropic, whose newest model, Claude Opus 4.6, got glowing reviews, especially for the agentic improvements it brought to the Claude Code tool. Note that OpenAI's GPT-3.5-Codex model, launched in early February, brought similarly impressive improvements to OpenAI's Codex coding tool.

The release of new versions of the base models now seems to affect the popularity of the consumer chatbots they power. After Google released its breakthrough Gemini 3 models last year, the Gemini chatbot saw big gains in usership. After Anthropic's release of Opus 4.6 in February, its Claude chatbot went to number one on the Apple App Store's free apps ranking for the first time. After the release of GPT-5.4, the ChatGPT app retook the number one spot. Tick-tock, tick-tock.

It's becoming clear that flagship AI models from the major labs are being built and trained to power agents, not just chatbots. That is, they are getting better at performing tasks rather than simply talking, whether that means operating a computer, researching on the web, or planning large projects. This shift from chatbots to agents will likely become more pronounced with future models, especially as the chatbot interface evolves to look more like a workspace.

Amazon puts some organizational guardrails around AI coding tool use

AI coding tools have had the most impact of any application of generative AI so far. They can dramatically speed up code production. But there are side effects. The Financial Times reported this week that Amazon's AWS cloud division held a large meeting of its engineers after a series of service outages, at least two of which were reportedly caused by code alterations made by an AI coding tool, and one of which was linked to Amazon's Kiro coding tool. Amazon says it will now require junior and mid-level engineers to obtain sign-off from more senior engineers for AI-assisted code changes.

Since the explosion in the use of AI coding tools began last year, software engineers have been arguing about how much human oversight the tools require. The tools are improving, as are the AI models underneath them, but they still write code that ends up causing bugs, sometimes discovered long after the code was written. Amazon says its outages stemmed from user error rather than an AI failure. The company also said that AI coding tools can amplify existing engineering weaknesses such as weak safeguards, poor documentation, and bypassed review processes.

That's more than PR talk. I've heard from a number of developers that engineers, especially younger ones, can lean too heavily on the tools, expect too much from them, and end up letting their usual software development hygiene practices slip. "I think we need to be clear that it is not magic," Replit CEO Amjad Masad said of coding tools during an interview last summer. That over-reliance often leads to a lack of proper code validation, security testing, and documentation. I suspect that both the tools and their users will have to change. The tools must shift toward proactively pushing human engineers toward better testing and validation practices, while human coders will continue to learn what their AI coding partners can and cannot do.
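Amazon hasn't said how it enforces the new sign-off rule, but a policy like this can be encoded directly in a CI pipeline. Here is a rough sketch, assuming a hypothetical "ai-assisted" pull-request label and a placeholder allowlist of senior reviewers; the GitHub REST endpoints used are real, everything else is illustrative.

    import os
    import sys
    import requests

    REPO = "example-org/example-service"      # placeholder repository
    PR_NUMBER = int(os.environ["PR_NUMBER"])  # supplied by the CI runner
    HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
    API = f"https://api.github.com/repos/{REPO}"

    SENIOR_ENGINEERS = {"alice", "bob"}       # placeholder reviewer allowlist

    # Pull requests share the issues API for labels.
    labels = requests.get(f"{API}/issues/{PR_NUMBER}/labels", headers=HEADERS).json()

    if any(label["name"] == "ai-assisted" for label in labels):
        reviews = requests.get(f"{API}/pulls/{PR_NUMBER}/reviews", headers=HEADERS).json()
        approvers = {r["user"]["login"] for r in reviews if r["state"] == "APPROVED"}
        if not approvers & SENIOR_ENGINEERS:
            # A non-zero exit fails the CI job and blocks the merge.
            sys.exit("ai-assisted change requires approval from a senior engineer")

    print("review gate passed")

A gate like this only works if engineers label their AI-assisted changes honestly, which is presumably why Amazon's version of the rule is organizational as well as technical.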
More AI coverage from Fast Company:

ChatGPT Edu feature reveals researchers' project metadata across universities
Google's Gemini AI wants to do the busywork in Docs and Sheets
Anthropic's Pentagon showdown is drawing Silicon Valley into a larger fight
AI agents are coming for government. How one big city is letting them in

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.