If you don’t want to be left behind by the AI revolution, you really need to start paying for it. At least that’s become the common refrain among some AI enthusiasts, who seem intent on instilling FOMO in less technical users. The free versions of ChatGPT and Claude, they say, are woefully inadequate if you want to understand where things are headed, so stop being a cheapskate and hand over your $20 (or $200) a month like the rest of us.

“Judging AI based on free-tier ChatGPT is like evaluating the state of smartphones by using a flip phone,” HyperWrite CEO Matt Shumer recently wrote in a widely shared essay on AI’s impact. “The people paying for the best tools, and actually using them daily for real work, know what’s coming.”

I’m giving you permission to safely ignore this advice, and to not feel bad about it. While an AI subscription might make sense if you’re running into specific frustrations with the free versions, you can still get plenty of mileage without paying, and learn a lot about the state of AI in the process. Don’t be frightened into buying something that hasn’t actually proven its value to you.

The state of the art is still free

One way that AI boosters try to scare you into paying for AI is by arguing that the free versions are already obsolete, so any negative impressions you might’ve gotten from them are misguided. “Part of the problem is that most people are using the free version of AI tools,” Shumer wrote in his essay. “The free version is over a year behind what paying users have access to.”

This claim is provably false: The free version of ChatGPT includes access to GPT-5.2, OpenAI’s latest model, which launched in December. The free version of Google Gemini includes access to Gemini Pro 3.1, which launched on February 19. Claude’s free version doesn’t include Opus 4.6, but has the same Sonnet 4.6 model that the paid version uses by default. It launched on February 17.
Microsoft 365 subscribers can also select “Smart Plus” in Copilot to use GPT-5.2, without a premium AI subscription. xAI’s Grok 4 is available for free.

Of course, the free versions of these tools all have usage limits, but so do the paid ones. When I signed up for a month of Claude Pro to test Opus 4.6, I quickly ran into yet another paywall. To continue the conversation, I had to either buy pay-as-you-go credits or upgrade to the $200-a-month Claude Max plan. Without paying more, I couldn’t use Claude at all, not even Sonnet 4.5, until my limit reset. My main takeaway was that I should have just stuck with Sonnet in the first place.

Instead of paying for some vague feeling that you’re getting the state of the art, you should play around with what AI companies offer for free. Make them demonstrate that the results are meaningfully different before you consider paying them, not after.

AI should prove itself to you, not vice versa

For AI boosters, the corollary to paying for AI is that you also need to throw an immense amount of time into figuring out what it’s for. Ethan Mollick, for instance, writes that you should “resign yourself to paying the $20 (the free versions are demos, not tools),” then spend the next hour testing it on various real-world tasks.

Sorry, but this is backward from how software as a service should work. It’s not your job to invest time and money into convincing yourself that AI is worth more time and money. Let the AI companies do the convincing, and don’t fall prey to FOMO in the meantime.

Playing the field is just as instructive

If you do commit to paying for an AI tool, chances are you won’t use other AI tools as much, or at all. But that in itself isn’t a great way to understand the state of AI. What you should be doing instead is bouncing around, taking full advantage of what each AI company offers for free.
That way, you’ll get a sense not just of the subtle differences between large language models, but also the unique features that each AI tool offers. You’ll also be less likely to run into usage limits, the only trade-off being that your past conversations will be scattered across a few different services.

Such behavior is, of course, wildly unprofitable for all the companies involved. But again, that’s not your problem. If you’re getting sufficient value out of free AI tools, the AI companies will have to tweak their free offerings accordingly (for instance, with ads) or come up with new features worth paying for. Claude Code, for instance, is available only with a subscription, and over time we may see more paywalled tools (like Claude Cowork, which is still in early development) that cater to specific tasks or verticals.

Until that happens, enjoy the free versions of AI tools, and rest easy knowing that you’re not missing much.
Category:
E-Commerce
It’s the last week of Black History Month (BHM) and it’s clear Americans are over performative values. Trite BHM-inspired merchandise sits on retailer shelves untouched while media is abuzz covering the artistry, activism, and symbolism of Bad Bunny’s Super Bowl halftime show. The signal is clear: consumers are looking to brands for real solutions to real problems, not products that commodify culture.

Most companies build everything from advertising to AI for the “average user,” but in doing so, they react to rather than lead markets. Strategic leaders look to growth audiences, underserved groups who are the fastest-growing demographics, as lead users. They are the “canaries in the coal mine” because they navigate the highest levels of systemic friction, making them the first to experience “average” design failures.

What does championing these lead users look like at a communications, product, or systems level? It looks like Elijah McCoy automating engine lubrication, an innovation bred from the friction between his engineering degree and the menial labor he was forced to perform, thus creating the “real McCoy” quality standard. It looks like Jerry Lawson changing the economics of the gaming industry by inventing the video game cartridge that divorced its hardware from its software. And it looks like emergency medicine becoming a global standard after being piloted by the Pittsburgh Freedom House Ambulance Service who, in the face of medical bias and systemic unemployment, also redefined emergency care as a public right.

Drawing from their lived experiences in underserved groups, these pioneers didn’t just solve problems; they mastered environmental friction. Today, that friction also manifests in algorithms. Championing growth audiences as lead users means ensuring they are critical AI system “stress testers.” When we fail to design for them, we allow AI data, development, and deployment to default to obtuse “averages” that can frustrate or drive away valuable customers.
Three recent examples highlight issues and opportunities.

Relying on ‘Data Infallibility’ versus Lived Realities

In this Infallibility Loop bias, a brand’s AI trusts a data source, like a flawed GPS coordinate or outdated government map, as an absolute truth, even when customers provide contrary evidence. This is a digital echo of historical redlining: a systemic refusal to see humans over faulty data.

The Experience: A Black homeowner in an affluent area is penalized by an AI that confuses her address with a property in a different town, automatically forcing unnecessary flood insurance onto her mortgage and increasing the payments. Despite providing human-verified deeds and highlighting known GPS errors, the AI blocks her incomplete payments and triggers automated credit hits. A resolution only came months later, after the consumer filed state-level servicer complaints.

The Fix: Prioritize Dynamic Qualitative Data Collection. Design should allow real-time, contextual evidence to override static, biased datasets. True brand innovation requires systems to yield to the experts: their customers.

Leveraging ‘Data Intimacy’ while Neglecting Situational Accuracy

This trust paradox occurs when brands use private data but fail to combine situational data, making personalization feel like needless surveillance.

The Experience: During January’s record-breaking New York snowstorm, a customer called a national pharmacy’s location in her neighborhood to make sure it was open. The AI-powered interactive voice response (IVR) recognized her number, asked for her birthdate, and greeted her by name. Yet, after performing this exchange, it provided a “default” confirmation that the store was open when asked. Without a car, the customer braved life-threatening conditions on foot only to find a handwritten note on the door indicating the store had closed due to the storm.

The Fix: Add Good Friction.
A term coined by MIT professor Renee Richardson Gosline, “Good Friction” requires that when external context (like a Level 5 storm) conflicts with standard scripts, the system pauses and verifies first.

Prioritizing ‘Recency’ But Erasing Loyalty

Recency bias in algorithms weights the last data point more heavily, potentially resulting in algorithmic erasure.

The Experience: A 20-year elite-status customer calls an airline, only to be greeted by the name of his niece (a nonmember relative for whom he recently booked a one-off ticket) and then erroneously deprioritized in the automated journey as a nonmember. In many “growth audience” and immigrant households, economics are multigenerational and communal, with a single “lead user” facilitating purchases for extended family. This airline system’s “memory” was shallow, seeing only the most recent transaction and ignoring a decades-long relationship because a reservation shared the same contact number.

The Fix: Focus on Holistic Design. AI must be weighted to recognize the arc of the customer journey, ensuring that loyalty isn’t erased by a single data point or the nuances of communal purchasing.

To be sure, bad data is a universal problem, but the lack of situational intelligence in our AI systems hits growth audiences, like Black consumers, first and hardest. Because these audiences represent a disproportionate share of future consumption and have the most “cultural common denominators,” their frictions are diagnostics for markets writ large. We aren’t just solving for a niche by championing them as lead users; we are adopting more rigorous, empathetic, expansive, and effective standards that solve real problems for all people.
At hundreds of Burger King restaurants across the U.S., there’s a new invisible worker who’s tracking which ingredients are in stock, analyzing daily sales data, and checking in on whether employees are saying “Thank you” and “You’re welcome.” It’s an AI assistant named Patty.

According to Thibault Roux, Burger King’s chief digital officer, the voice-activated chatbot is designed to help employees and managers handle tasks that might usually require pulling out a computer or consulting an instruction guide. Patty began showing up at select locations about a year ago, and is now in a pilot phase at approximately 500 Burger Kings. It’s expected to roll out to the rest of the chain’s U.S. locations by the end of the year.

On a day-to-day basis, Patty has an array of functions, from letting a manager know if a store is low on onions to helping an employee build a new burger. But it has another role that’s raising quite a few eyebrows: analyzing Burger King locations based on friendliness by tracking employees’ use of key phrases like “Welcome to Burger King,” “Please,” and “Thank you.” Online, commenters are concerned that this functionality is a slippery slope toward 1984-style employee surveillance. In an interview with Fast Company, though, Roux clarified that Patty is not being used to analyze individual employees’ performance, and is instead imagined as a kind of coach.

“It’s truly meant to be a coaching and operational tool to really help our restaurants manage complexities and stay focused on a great guest experience,” Roux says. “Guests want our service to be more friendly, and that’s ultimately what we’re trying to achieve here.”

Patty, are we running low on Diet Coke?

Technically, Patty is the chatbot version of Burger King’s assistant platform, which collects data from operations including drive-through conversations, inventory, and sales, and then uses AI to analyze patterns in that data.
For now, Patty operates on a customized model from OpenAI, though Roux says the technology is flexible enough that it could integrate with another partner in the future (like Anthropic or Gemini), depending on the company’s needs.

For managers and employees in stores, Roux says Patty operates similarly to something like Siri. Patty is activated by a small button on the side of an employee’s headset, and they can ask it direct verbal questions related to their specific store, like recent sales figures or inventory updates, as well as more general company information, to which the bot will provide a verbal answer.

“If you’re looking to clean the shake machine, [you can ask Patty] the procedures to clean it,” Roux explains. “Or we have a lot of limited-time offers, and sometimes they can be cumbersome to remember. You can easily tap into Patty and be like, ‘Hey, remind me, does the new maple bourbon barbecue build have crispy jalapeños?’”

Patty can also reach out to employees directly if it notices a pattern of interest. For example, if Patty thinks a specific store is out of lettuce, it might ping a manager to confirm. Once it’s received confirmation, it can mark lettuce as sold out on that location’s app and website, a process that previously would have required human intervention. Roux says franchisees and regional managers can decide how they want Patty to reach employees with information, whether it’s through their headsets or via a text message (though the tech is programmed explicitly to never interrupt a worker during a customer interaction).

Insights from Burger King’s assistant platform also live outside of employees’ headsets. Managers can check information from the tool on an accompanying website or app. For example, Roux says, when a district manager is visiting a new store, they might ask Patty on the app, “What are the top three guest complaints at this location this week?” or “What are their top missing items?”
In an interview with Fast Company writer Jeff Beer earlier this month, Burger King President Tom Curtis said the assistant platform has already led to some significant menu changes. Curtis explained that the AI tracked all the times that team members said “I’m sorry, we don’t have that” and linked them back to a common denominator: apple pie. In January, Burger King brought back its apple pie for the first time since 2020.

‘We’re in the idiocracy version of 1984’

Patty’s more straightforward uses, like helping managers access sales data and check inventory, seem fairly predictable in the context of fast food. Where Burger King is really pushing Patty’s use cases, though, is with its friendliness metric. In an interview with The Verge on February 26, Roux said Patty would recognize phrases like “Welcome to Burger King,” “Please,” and “Thank you,” and then give managers access to data on their location’s friendliness performance based on those keywords.

Mere hours after that piece went live, a thread on Patty in the subreddit r/technology had already amassed more than 15,000 upvotes and nearly 3,000 comments. Common refrains from users include comparing the technology to the surveillance state in George Orwell’s novel 1984, labeling it authoritarian and dystopian, and accusing Burger King of employee surveillance. “This would be criticized as being cartoonishly unrealistic in a sci-fi movie 10 years ago,” one user wrote. Another added, “We’re in the idiocracy version of 1984.”

When asked about this response, Roux says the data from employees’ conversations is anonymized, and that none of these friendliness metrics will be used for grading or assessing individuals. Further, he adds, Patty will not directly instruct employees on what to say or how to say it. Instead, data on friendliness will be shared with managers, who can use it for face-to-face coaching with their teams. Still, it’s unclear exactly how Patty is quantifying friendliness.
In a video explanation of the feature, a manager is shown asking the bot, “Is there anything that needs my immediate attention?” to which it responds, “The team’s friendliness scores this morning were the highest this week.”

In an email to Fast Company, a Burger King spokesperson said, “In select pilot locations, we’ve explored using aggregated keywords, including common hospitality phrases, as one of several signals to help managers understand overall service patterns. The tool is not used to score individuals or enforce scripts.” Burger King did not respond to Fast Company’s request for clarification on how friendliness scores are calculated.

So far, Roux says he’s seen growing interest in Patty from franchisees, with several managers making specific requests for future add-ons. “A lot of our franchisees . . . and regional general managers are very competitive, so they want to know, ‘Hey, how do I compare to other restaurants?’” Roux says. “I think that’s something that we’re going to be rolling out. In fact, we were looking at some of the designs earlier this week with the franchisees. So this is only the beginning.”