Spend a few minutes on developer Twitter and you'll run into it: vibe coding. With a name like that, it might sound like a passing internet trend, but it's become a real, visible part of software culture. It's shorthand for letting AI generate code from simple language prompts instead of writing it manually. In many ways, it's great. AI has lowered the barrier to entry for coding, and that's pulled in a wave of hobbyists, designers, and side-project tinkerers who might never have touched a codebase before. Tools like Warp, Cursor, and Claude Code uplevel even professional developers, making it possible to ship something working in hours instead of weeks.

But here's the flip side: when AI can move faster than you can think, it's easy to run straight past the guardrails. We've already seen how that can go wrong, like with the recent Tea app breach, which shows even polished, fully tested code can hide critical vulnerabilities if humans don't review it thoroughly. Optimizing for speed over clarity lets AI produce something that works in the moment, but without understanding it, you can't know what might break later. This isn't just technical debt anymore; it's a risk to customer trust.

The instinctive reaction to solve this trade-off is to throw more tech at the problem: add automated scans, add a "secure by default" setting. Those things matter. But I'd argue that failure in vibe coding doesn't start with tooling; it starts with leadership. If you don't lead your team through this new way of working, they'll either move too slowly to benefit from AI or move so fast they start breaking things in ways a security checklist can't save you from.

The real job is steering, not slowing down

When we built Warp 2.0, our agentic coding agent, we put a simple mandate in place: use Warp to build Warp. That meant every coding task started with prompting an AI agent. Sometimes it nailed it in one shot; sometimes we had to drop back to manual coding. But the point wasn't dogma; it was to force us to learn, as a team, how to work in an agent-driven world.

We learned quickly that more AI doesn't automatically mean better. AI can write a thousand lines of plausible-looking code before you've finished your coffee. Without structure, that's a recipe for brittle, unmaintainable systems. The real challenge was getting people to treat AI-generated code with the same discipline as code they wrote themselves. That's a leadership problem. It's about setting cultural norms and making sure they stick.

Three things leaders need to get right

1. Hold developers accountable

The biggest mental trap is treating the AI as a second engineer who owns what it wrote. It doesn't. If someone contributes code to a project, they own that code. They need to understand it as deeply as if they typed it out line by line. "AI wrote it" should never be an excuse for a bug.

Leaders can't just say this once; they have to model it. When you review code, ask questions that make it clear you expect comprehension, not just functionality: Why does this query take so long to run? What happens if the input is null? That's how you set the standard that understanding is part of shipping.

2. Guide AI with specifics

Using large, one-shot prompts is like cooking without tasting as you go: sometimes it works, but usually it's a mess. AI is far more effective when you request small, testable changes and review them step by step. It's not just about quality; it also builds a feedback loop that helps your team get better at prompting over time.
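To make "small, testable changes" concrete, here's a minimal sketch of the pattern (the names, like apply_discount, are hypothetical illustrations, not Warp's actual workflow): the human writes a couple of tests that pin down the behavior, then asks the agent for just enough code to make them pass.

```python
# Hypothetical sketch: scoping an AI prompt to one small, verifiable
# unit of work. The human writes the tests first, then asks the agent
# for just enough code to make them pass.
import pytest


def apply_discount(price: float, rate: float) -> float:
    """Return price reduced by rate (e.g., 0.10 for 10% off).

    This is the piece the agent would be asked to produce.
    """
    if price is None or rate is None:
        raise TypeError("price and rate must be numbers")
    return price * (1 - rate)


def test_applies_percentage_discount():
    assert apply_discount(100.0, 0.10) == pytest.approx(90.0)


def test_rejects_null_input():
    # The reviewer question "what happens if the input is null?"
    # encoded as an executable check.
    with pytest.raises(TypeError):
        apply_discount(None, 0.10)
```

The discount math is beside the point; what matters is that each prompt maps to a unit of work small enough to read, run, and reject, and that the edge cases reviewers ask about, like null inputs, become executable checks.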
In practice, this means teaching your team to guide the AI like they'd mentor a junior engineer: explain the architecture, specify where tests should live, and review work in progress. You can even have the AI write tests as it goes, one way to force smaller, verifiable units of work.

3. Build the review culture now

In AI workflows, teams move fastest when AI and humans work side by side, generating and reviewing in small steps. The first draft of a feature is the most important one to get eyes on. Have someone review AI-generated work early and focus on the big-picture questions first, like whether it's secure, reliable, and solves the right problem.

The leadership challenge is making reviews a priority without slowing anyone down. Have teams aim to give feedback in hours, not days, and encourage finding ways for work to keep moving while reviews happen. This builds momentum while creating a culture that values careful, early oversight over rushing to get something done.

Guardrails only work if people use them

Safety tools and checks can help catch mistakes, but they don't replace good habits. If a team prioritizes speed over care, AI guardrails just get in the way, and people will find ways around them. That's why the core of leading in the AI era is cultural: you have to teach people how to integrate AI into their workflow without losing sight of the fundamentals.

The teams that get this right will be able to take advantage of the speed AI enables without bleeding quality or trust. The ones that don't will move fast for a while, until they ship something that takes them down.

Vibe coding isn't going away, and I think that's a good thing. So long as teams lead with people, not just technology, they will come out ahead and create better experiences for users along the way.
Tech giants are making grand promises for the AI age. The technology, we are told, might discover a new generation of medical interventions, and possibly answer some of the most difficult questions facing physics and mathematics. Large language models could soon rival human intellectual abilities, they claim, and artificial superintelligence might even best us. This is exciting, but also scary, they say, since the rise of AGI, or artificial general intelligence, could pose an uncontrollable threat to the human species.

U.S. government officials working with AI, including those charged with both implementing and regulating the tech in the government, are taking a different tack. They admit that the government is still falling behind the private sector in implementing LLM tech, and that there's a reason for agencies to speed up adoption. Still, many question the hyperbolic terminology used by AI companies to promote the technology. And they warn that the biggest dangers presented by AI are not those associated with AGI that might rival human abilities, but other concerns, including unreliability and the risk that LLMs are eventually used to undercut democratic values and civil rights.

Fast Company spoke with seven people who've worked at the intersection of government and technology on the hype behind AI, and what excites and worries them about the technology. Here's what they said.

Charles Sun, former federal IT official

Sun, a former employee at the Department of Homeland Security, believes AI is, yes, overhyped, especially, he says, when people claim that AI is "bigger than the internet." He describes the technology simply as "large-scale pattern recognition powered by statistical modeling," noting AI's current wave is "impressive but not miraculous." Sun argues that the tech is an accelerator of human cognition, not a replacement for it. "I prefer to say that AI will out-process us, not outthink us. Systems can already surpass human capacity in data scale and speed, but intelligence is not a linear metric. We created the algorithms, and we define the rules of their operation."

"AI in government should be treated as a critical-infrastructure component, not a novelty," he continues. "The danger isn't that AI becomes 'too intelligent,' but that it becomes too influential without accountability. The real threat is unexamined adoption, not runaway intelligence."

Former White House AI official

"I was worried at the beginning of this . . . when we decided that instead of focusing on mundane everyday use cases for workers, we decided at a national security front that we need to wholesale replace much of our critical infrastructure to support and be used by AI," says the person, who spoke on background. "That creates a massive single point of failure for us that depends largely on compute and data centers never failing, and models being impervious to attacks, neither of which I think anyone, no matter how technical they are or not, would place their faith in."

The former official says they're not worried about AGI, at least for now: "Next token prediction is not nearly enough for us to model complex behaviors and pattern recognition that we would qualify as general intelligence."

David Nesting, former White House AI and cybersecurity adviser

"AI is fantastic at getting insights out of large amounts of data. Those who have AI will be better capable of using data to make better decisions, and to do so in seconds rather than days or weeks."
"There's so much data about us out there that hasn't really hurt us because nobody's ever really had the tools to exploit it all, but that's changing quickly," Nesting says. "I'm worried about the government turning AI against its own people, and I'm worried about AI being used to deprive people of their rights in ways that they can't easily understand or appeal."

Nesting adds: "I'm also worried about the government setting requirements for AI models intended to eliminate 'bias,' but without a clear definition of what 'bias' means. Instead, we get AI models biased toward some 'official' ideological viewpoint. We've already seen this in China: Ask DeepSeek about Tiananmen Square. Will American AI models be expected to maintain an official viewpoint on the January 6th riots?"

"I think we're going to be arguing about what AGI means long after it's effectively here," he continues. "Computers have been doing certain tasks better than people for nearly a century. AI is just expanding that set of tasks more quickly. I think the more alarming milestone will be the point at which AI can be exploited by people to increase their own power and harm others."

"You don't need AGI for that, and in some ways we're already there," Nesting says. "Americans today are increasingly and unknowingly interacting online with fake accounts run by AI that are indistinguishable from real people, even whole communities of people, confirming every fear and anxiety they have, and validating their outrage and hatred."

Abigail Haddad, former member of the AI Corps at DHS

The biggest problem currently, Haddad argues, is that AI is actually being underused in government. An immense amount of work went into making these tools available inside of federal agencies, she notes, but what's available in the government is still behind what's available commercially. There are concerns about LLMs training on data, but those tools are operating on cloud systems that follow federal cybersecurity standards.

"People who care about public services and state capacity should be irate at how much is still happening manually and in Excel," she says.

Tony Arcadi, former chief information officer of the Treasury Department

"Computers are already smarter than us. It's a very nebulous term. What does that really consist of? At least my computer is smarter than me when it comes to complex mathematical calculations," Arcadi says. "The sudden emergence of AGI or the singularity, there's this thing called Roko's basilisk, where the AI will go back in time and, I don't remember the exact thing, but kill people who interfered with its development. I don't really go for all of that."

He adds: "The big challenge that I see leveraging AI in government is less around, if you will, the fear factor of the AI gone rogue, but more around the resiliency, reliability, and dependability of AI, which, today, is not great."

Eric Hysen, former chief information officer at DHS

When asked a few months ago about whether AI might become so powerful that the process of governing might be offloaded to software, Hysen shared the following: "I think there is something fundamentally human that Americans expect about their government. . . . Government decision-making, at some level, is fundamentally different than the way private companies make decisions, even if they are of very similar complexity." Some decisions, he added, "we're always going to want to be fundamentally made by a human being, even if it's AI-assisted in a lot of ways."
"It's going to look, more long term, like heavy use of AI that will still ultimately feed, for a lot of key things, to human decision makers."

Arati Prabhakar, former science and technology adviser to President Biden

Prabhakar, who led the Office of Science and Technology Policy under President Joe Biden, is concerned that the conversation about AGI is being used to influence policy around the technology more broadly. She's also skeptical that the technology is as powerful as people foretell. "I really feel like I'm in a freshman dorm room at 2 in the morning when I start hearing those conversations," she says. "Your brain is using 20 or 25 watts to do all the things that it does. That includes all kinds of things that are way beyond LLMs. [It's] about 25 watts compared to the mega data centers that it takes to train and then to use AI models."

That's just one hint that we are so far from anything approximating human intelligence, she argues. "Most troubling is it puts the focus on the technology rather than the human choices that are being made in companies by policymakers about what to build, where to use it, and what kind of guardrails really will make it effective."

This story was supported by the Tarbell Center for AI Journalism.
Michelle had barely knotted her apron strings before the day turned ugly. "When I told her I could only serve regular coffee, not the waffle-flavored one she wanted, she threw the boiling-hot pot at me," she tells Fast Company, recounting one violent encounter with a customer. Working at a popular all-day breakfast chain, Michelle has learned that customer service often means surviving other people's rage: "I've been cussed out, had hot food thrown on me, even dodged a plate thrown at my head," she says. Lately, the sexual comments from male customers have gotten worse. (Workers in this story have been given pseudonyms to protect them from retaliation.) Still, she shows up, because she hopes to save enough to launch her own business soon.

Once upon a time, "the customer is king" was a rallying cry for better service. Today, it's a management mantra gone feral. What began as good business sense, touted by historic retail magnates like Marshall Field and Harry Selfridge, has curdled into a corporate servitude that treats employees as expendable shock absorbers for awful behavior and diva demands. With the holiday rush looming, customer-facing workers in cafés, call centers, and car garages are bracing themselves to smile through every client's tantrum, no matter how absurd.

Rampant hostility, and it's getting worse

At Michelle's workplace, the patron always comes first, while the safety of staff barely makes the list. Even after several viral videos of incidents at the chain's restaurants, she says her complaints rarely go anywhere. One of her managers will step in if he sees something on the floor that's out of line, but others just ask what she did to provoke it. "It makes me angry, yet I feel I just have to take it," she says. "It's an epidemic."

That dynamic is baked into North American service culture. "The 'customer is king' mantra has become a free pass for people to act however they want, with impunity," says Gordon Sayre, a professor at Emlyon Business School in Lyon, France, who has been studying its impact on employees. "It breeds entitlement, and that entitlement gets abused, leaving workers with almost no room to push back."

The mantra dictates that service staff stay deferential, careful about their every word and gesture, while clients hold the upper hand. With some workers getting all of their take-home pay from tips and gratuity, customers can quite literally decide how much an employee earns. And according to Sayre's research, that mix of financial power and enforced politeness makes sexual harassment on the job more likely.

The data mirrors reality. In a 2025 survey of 21,000 US frontline workers in healthcare, food service, education, retail, and transportation, more than half (53%) said they'd recently faced verbally abusive, threatening, or unruly customers. There's also been a meaningful uptick in customers acting out. According to Arizona State University's annual National Customer Rage survey, 43% admit to having raised their voice to show displeasure, up from 35% in 2015. And since 2020, the percentage of customers seeking revenge for their hassles has tripled. Such encounters take a toll: employees on the receiving end are twice as likely to report that their jobs are damaging their physical health, and nearly twice as likely to feel unsafe at work, according to analytics platform Perceptyx.

"Management didn't back my coworker"

Madison has been a server for more than a decade, bouncing between casual spots and fine dining rooms.
These days, she's at a former Michelin-starred restaurant in New York, and she's long since accepted the industry's devotion to "the customer is always right." She sees it play out nightly, usually when someone insists a dish isn't cooked properly, or worse, admits they just don't like it. "There's a specific type of persnickety person who gets drunk on the power of being rude and demanding," she tells Fast Company. "Once I spot a table with that vibe, I know I'm in for a long night."

The problem is, the mentality rewards bad behavior. Recently, a diner claimed he'd only had one beer, when it was clearly two. "Management didn't back my coworker, and the guy was charged for just one, which ultimately comes out of our tip pool," says Madison. "He might have left with a bad taste, but he still got what he wanted."

Most hospitality staff Fast Company spoke with said the same thing: comping drinks, desserts, and even entire checks has become routine when someone complains. That generosity, however, comes at a time when restaurants and bars can least afford it. Across the US, the industry is being squeezed from both sides: soaring labor and ingredient costs on one end, and cautious consumer spending on the other. Growth in 2025 has been even slower than during the pandemic lockdown years.

So why are so many establishments still giving freebies to difficult customers? Because in the age of online reviews, every unhappy diner is a one-person marketing department, ready to dish out brutal takedowns. A single post can tank a spot's reputation, and naming individual staff is common practice. To avoid bad publicity, businesses are trading profit for peace, and making sacrifices to get those all-important five-star ratings. Even a middling three-star review, which most customers equate to a good or average experience, can obliterate visibility on platforms like Yelp or Google.

For individual frontline employees, those digital judgments hit harder. A dip in ratings can mean being moved to a slower section or losing a lucrative shift. And in the platform gig economy, where algorithmic rankings rule, a single bad review can mean less work, or none at all. Danielle, a salon owner in Washington, remembers when an unhappy client not only left a bad review, but recruited 200 others to do the same. "I've no idea how she found so many people, but it was traumatizing watching one-star reviews just flood in," she says. Danielle has contacted Google and Yelp in the past, but they refuse to remove reviews. Even on online platforms stuffed with fake and fraudulent bot reviews, the customer is always right, right?

"Rest assured, we'll be talking about you behind your back"

The real problem with the beloved slogan isn't the complaints or stingy tips. It's the emotional contortion required to stay polite while being treated like a punching bag. Rose Hackman, author of Emotional Labor: The Invisible Work Shaping Our Lives and How to Claim Our Power, interviewed service workers across industries for her book and found a resounding answer: what counts isn't the service, it's the smile. "Emotional labor is highly devalued, feminized, and rendered invisible, despite it being one of the most central forms of work in our economy," says Hackman. "We need to value it more." Of course, that responsibility sits not just with consumers, but with employers too.

Until the culture actually changes, employees cope the best they can. Avery, a server in an upmarket seafood restaurant in Philadelphia, has gotten better at protecting herself with age.
"I used to fold like a beach chair to their needs and demands, but I'm less willing now," she explains. "Outside of this job, I'm a performer, and there are similarities there: I put on a mask, act out a show, then the lights come up, I clock out, and I get to be someone else."

Sadly, no coping strategy is perfect. Closing yourself off and faking an emotion, also known as surface acting, can look professional, but it impacts your mood, explains Sayre. Trying to fix the situation or reframe the customer's behavior can protect your emotional health, but hurts performance. Instead, venting with trusted coworkers acts as a vital pressure valve, a place to express real emotions and recover from the constant stress.

Jesse, a New York bartender, is amazed by the rancid behavior he sees on the daily, but the camaraderie with his team keeps him sane. "If you walk in and make my life harder, talking to me in a way you would never speak to a friend or your mother; babe, you've decided what our relationship is gonna be," he says. "Rest assured, we'll be talking about you behind your back, laughing and joking about how you're dressed."

With "customer is king" still reigning, America desperately needs a reminder about the inherent social contract of emotional labor, a contract that only works if respect flows both ways. Without it, the whole system falls apart, leaving behind burnt-out staff and sour customers. As Jesse says: "You're a guest in my home, so I'm gonna take care of you. All you have to do is enjoy your night, and pay me for the work I do."