2025-11-11 11:00:00| Fast Company

A few months ago, I was lying in bed, lightly clutching my phone, when Instagram Reels presented me with a brief video that promised an impossible soap opera: There were animated cats, with feline faces but unmistakable human bodies, living seemingly human lives, including in a human-seeming house and also, for some totally unclear reason, at a seemingly human construction site. There was drama: A female cat appeared to have been knocked up. There was also, somehow, a related love triangle involving two far more muscle-y male cats vying for her affection. None of the cats actually spoke. Yet somehow the plot proceeded, with one cat winning the heroine's heart. It was well rendered. It was brain-meltingly inane.

"AI slop" is now our collective shorthand for short-form digital garbage. Specifically, the term "slop" evokes liquidy, wasteful goo, threatening to gush over everything. We use this description because the content AI is manufacturing is often low-quality, vulgar, stupid, even nihilist. What decent defense can be mounted for the video I just described, at least to the best of my internet-corroded memory? This output is gross, indeed sloppy. And it's getting everywhere.

AI slop did not emerge from artificial intelligence, generally. (Artificial intelligence has a broad scope, but the term has been around for a few decades and is often associated with machine learning.) Specifically, the term was birthed around 2023 in the aftermath of generative AI, when platforms like ChatGPT and DALL-E became publicly available, according to Google Trends. All of a sudden, everyday internet users could generate all sorts of stuff.

While AI companies sort out a business model (they're working on it!), the internet public has been left to navigate a subsidized AI free-for-all, where we can render slop into existence with merely some keyword cues and a chatbot. Of course, with mass production comes surplus and, then, refuse. We containerize actual trash because otherwise debris gets on everything else and makes everything less good. AI is, arguably, doing the same on the internet. It's clear we think of a lot of AI output as trash, though we're not doing much to clean it up.

There are already clear signs of contamination. The arrival of low-cost AI-generated content has undermined a certain category of digital parachute journalism: stumbling upon a wacky or concerning online trend, then quickly writing it up without any form of verification. Fox News recently published an attempt at such internet stenography during the shutdown, designed to denigrate Supplemental Nutrition Assistance Program (SNAP) beneficiaries, only to later issue a clarification after the outlet learned the videos were created by AI. The confusion goes the other way, too: While reading a clue, Jeopardy! host Ken Jennings recently caught heat for describing something as AI-generated when, in fact, it wasn't. The quagmire has even gotten the billionaires. In the aftermath of Zohran Mamdani's victory in New York City's mayoral election, financier Bill Ackman shared a video of Elon Musk talking about the mayor-elect. "Musk is the spokesperson. He is brilliant, incredibly articulate, and spot on," said Ackman of the video, only for the Community Notes section of X to confirm that the video of Musk was AI-generated. The Notes entry pointed to the video's producers, who note their channel isn't actually affiliated with the SpaceX executive.
Deni Ellis Béchard, a senior technology writer at Scientific American, recently cautioned that the challenge of mass-produced cultural content, of course, isn't new: Innovative technologies always spur new forms of art, but also a largesse of worthless bleh. This was also the case with the printing press, the internet, and cinema, he explains. "In all of these situations, the point wasn't to forge masterpieces; it was to create rapidly and cheaply," he writes. But the production of new types of slop widens the on-ramps, allowing more people to participate, just as the internet and social media birthed bunk but also new kinds of creators. Perhaps because much of mass-made culture has been forgettable, original work stands out even more clearly against the backdrop of sameness, and audiences begin to demand more of it. Indeed, the world of mass AI creation will inevitably feature some true gems. AI masterpieces, even. But there are real, unfortunate consequences of the real getting all mixed up with the fake, even more than it already was. Sure, there are reasons to think that the search for an objective truth is futile. But the alternative is corrosive, and structural, confusion. The risk isn't that we'll miss the AI jewels hidden under slop, but that we, ourselves, will drown in it.

We might even be fogging up the digital panopticon. It's become totally normal to know quite a lot about someone's life from their social media. But today, my Instagram Reels feed, at least, is clogged with bizarre (though, it pains me to admit, engrossing) AI videos. These videos are certainly less common on the platform's classic photo grid, but the platform is pushing us toward short-form video anyway, where this slop flourishes.

Eventually, we'll reach a tipping point where AI overruns organic human activity on the internet. As Axios observed, the web will shift into a bot-to-bot, rather than person-to-person, platform. This, of course, is hard to measure: The whole point is that bots are trying to impersonate humans. Still, one cybersecurity firm recently found that 51% of all web traffic is now generated by bots. Last year, an analysis published by Wired found, over a multiweek period, that 47% of Medium posts appeared to be generated by AI. The company's leadership seemed totally fine with this, as long as people weren't reading the stuff.

But even if we aren't reading the trash, it's still introducing a new source of duplicity to our collective online knowledge. Eventually, also, the same confusion will come for the machines. A study in Nature published earlier this month found that AI can struggle with significant attribution bias. Even worse: "We also find that, while recent models show competence in recursive knowledge tasks, they still rely on inconsistent reasoning strategies, suggesting superficial pattern matching rather than robust epistemic understanding. Most models lack a robust understanding of the factive nature of knowledge, that knowledge inherently requires truth. These limitations necessitate urgent improvements before deploying LMs in high-stakes domains where epistemic distinctions are crucial." Companies have built powerful facial recognition by slurping up images of faces posted on social media to train detection algorithms. AI faces might complicate this methodology, though.
A few months ago, FedScoop reported that Clearview AI, a dystopian operation that scraped hundreds of millions of images from social media to build a highly accurate facial recognition model and then sold that technology to the government, was hoping to build a deepfake detector. LinkedIn recently announced that it's now using data from its site to improve Microsoft's generative AI models, though much of the site already sounds like an AI bot (Is it? We don't have a way to measure!). AI companies have explored using synthetic data to train AI systems. A reasonable strategy? Perhaps, in some contexts. But it also seems like a bad idea.

In fact, there are tons of concerns about AI contaminating itself. There's serious worry about a phenomenon called model collapse, for instance. Another Nature study last year found that indiscriminate use of model-generated content in training causes "irreversible defects" in the resulting models. There's the possibility of creating an AI feedback loop, corrupting the very real and very true human data that was supposed to, in aggregate, make the technology so powerful. Amid the unctuous praise lobbed toward AI firms, slop seems like a problem for them, too.

The nightmare scenario is something like the Kessler Syndrome, a fancy coinage that describes how humanity is polluting outer space. In low-Earth orbit, space trash (including a lot of dead satellites) frequently hits other trash, powerful collisions that then produce even more space trash, making the entire place cloudier and much harder to navigate and use. A similar future could await artificial intelligence: a whack-a-mole hodgepodge of AI creations and AI detections, all trained on increasingly AI-polluted data.

When you get sloppy, you slur your words and start to stumble. AI may be similarly fallible.
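To make that feedback-loop worry concrete, here is a minimal, purely illustrative sketch (my own toy construction, not the setup of the Nature study cited above): a "model" that only fits a mean and standard deviation to data, where each new generation is trained solely on samples produced by the previous generation's fit. Run long enough, the fitted spread tends to drift toward zero and the tails vanish, the qualitative failure mode researchers describe as model collapse.

import numpy as np

# Toy illustration of a synthetic-data feedback loop (an assumption-laden sketch,
# not the cited study's method). Each generation is "trained" only on samples
# generated by the previous generation's fitted Gaussian.
rng = np.random.default_rng(0)
samples_per_generation = 100   # small training sets make the degradation visible
mu, sigma = 0.0, 1.0           # generation 0: the original "human" distribution

for generation in range(1, 501):
    synthetic = rng.normal(mu, sigma, samples_per_generation)  # model output
    mu, sigma = synthetic.mean(), synthetic.std()              # refit on it alone
    if generation % 100 == 0:
        print(f"generation {generation}: mean={mu:+.3f}, std={sigma:.3f}")

# Typical behavior: the standard deviation shrinks across generations, i.e. the
# fitted distribution loses its tails and collapses toward a narrow point.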


Category: E-Commerce

 

LATEST NEWS

2025-11-11 10:30:00| Fast Company

There are a lot of words marketers can't seem to quit. Unique. Authentic. Real. But these are threadbare clichés, all but nullified by the erosion of their meaning, a dilution fueled by brands' desire to be generally, yet specifically, for everyone. But everyone is not a target audience. It's a comfortable void. What brands really need right now isn't another lap around the buzzword block. It's courage. Courage to lean into the one trait that could cut through in a world of algorithms, sameness, and mediocrity. Marketers need to be weirder.

If you want a sociological anecdote of how weird wins, look no further than online dating. Dating apps have shown us that people don't actually want the most normal partner. They want quirks that stand out. Hinge data shows that profiles mentioning a niche interest, like a specific video game or obscure hobby, are more likely to get matches than generic "I like to travel" statements. Marketing works the same way. Generic "quality service" or "trusted partner" claims are the equivalent of "I love long walks on the beach." Tepid is a turnoff. While being good-looking can get you plenty far, to really connect, you need quirks. Mass marketing, like mass dating, creates fatigue. Precision, passion, and personalization, the pillars of weird, create chemistry. When a brand flies its freak flag high, it shows the right customers: Yes, we're your people.

The Crocs case

Take Crocs. Once the fashion world's punchline, they leaned into their weirdness with bold collabs, from KFC bucket Crocs to Balenciaga platform Crocs. Instead of pretending to be a lifestyle brand, they became a cameo brand: something you add to your life in a flash of bold comfort. Their revenue hit $3.96 billion in 2023, up nearly 12% from the year before. That's what I call laughing all the way to the bank.

Weird is always the evolutionary advantage, the bright feather on a dull bird. Yes, it may feel like a risk to shake off the camouflage, but if your biggest problem becomes being too visible, wouldn't that be a happy day? We've all heard the phrase "unprecedented times" so much it's basically become elevator music, but unprecedented times are exactly when evolution has the most fun. Charles Darwin called it adaptive radiation: species diversifying into weird little niches that thrive when old systems collapse. Marketing is in its own adaptive radiation moment. Large language models (LLMs) and generative AI are both collapsing the funnel and flooding the market with mediocrity and brand doppelgängers. Now more than ever, the average of averages is going to fail to thrive.

Grow a horn

So, what's a brand to do in this mush of mid? Grow a horn. Sprout a freaky little tail. If everyone else is cranking out the same optimized content-marketing thought leadership, weird is the mutation that keeps you from extinction. Just ask Duolingo. Their TikTok presence, anchored by a giant green owl who is somehow equal parts threatening and adorable, has over 10 million followers. It's unhinged, it's absurd, and it's working. Weird didn't just help them survive. It helped them dominate the landscape, and now anyone who tries to emulate that success is just doing a bad Duolingo impression.

Now, absurdism isn't new; it's just having another renaissance. Whenever people face the unknown or the unbearable, weirdness bubbles up as both coping mechanism and cultural shorthand. Marketers should look at what is breaking through the anxieties of the moment and connecting, and ask why.
A giant owl twerking on TikTok, a water brand calling itself Liquid Death, a fast-food chain tweeting in all caps about sauce shortages. These are signals that brands are fluent in the absurdist yet timely language their audiences are already speaking. In an era where sameness is free, weird is priceless. Weird is precision. Weird is passion. Weird is personal. Some call it cringe. I call it survival. And if you want your brand to not just survive but thrive in 2025 and beyond, it's time to get a little freaky.


Category: E-Commerce

 

2025-11-11 10:30:00| Fast Company

When it comes to inquiring about, ahem, certain products, shoppers prefer the inhuman touch. That is what we found in a study of consumer habits around products that traditionally come with a degree of embarrassment: think acne cream, diarrhea medication, adult sex toys, or personal lubricant. While brands may assume consumers hate chatbots, our series of studies involving more than 6,000 participants found a clear pattern: When it comes to purchases that make people feel embarrassed, consumers prefer chatbots over human service reps.

In one experiment, we asked participants to imagine shopping for medications for diarrhea and hay fever. They were offered two online pharmacies, one with a human pharmacist and the other with a chatbot pharmacist. The medications were packaged identically, with the only difference being their labels for diarrhea or hay fever. More than 80% of consumers looking for diarrhea treatment preferred a store with a clearly nonhuman chatbot. In comparison, just 9% of those shopping for hay fever medication preferred nonhuman chatbots. This is because, participants told us, they did not think chatbots have minds, that is, the ability to judge or feel.

In fact, when it comes to selling embarrassing products, making chatbots look or sound human can actually backfire. In another study, we asked 1,500 people to imagine buying diarrhea pills online. Participants were randomly assigned to one of three conditions: an online drugstore with a human service rep, the same store with a humanlike chatbot with a profile photo and name, or the same store with a chatbot that was clearly botlike in both its name and icon. We then asked participants how likely they would be to seek help from the service agent. The results were clear: Willingness to interact dropped as the agent seemed more human. Interest peaked with the clearly machine-like chatbot and hit its lowest point with the human service rep.

Why it matters

As a scholar of marketing and consumer behavior, I know chatbots play an increasingly large part in e-retail. In fact, one report found 80% of retail and e-commerce businesses use AI chatbots or plan to use them in the near future. Companies need to answer two questions: When should they deploy chatbots? And how should the chatbots be designed? Many companies may assume the best strategy is to make bots look and sound more human, intuiting that consumers don't want to talk to machines. But our findings show the opposite can be true. In moments when embarrassment looms large, humanlike chatbots can backfire. The practical takeaway is that brands should not default to humanizing their chatbots. Sometimes the most effective bot is the one that looks and sounds like a machine.

What still isn't known

So far, we've looked at everyday purchases where embarrassment is easy to imagine, such as hemorrhoid cream, anti-wrinkle cream, personal lubricant, and adult toys. However, we believe the insights extend more broadly. For example, women getting a quote for car repair may be more self-conscious, as this is a purchase context where women have traditionally been more stigmatized. Similarly, men shopping for cosmetic products may feel judged in a category that has traditionally been marketed to women. In contexts like these, companies could deploy chatbots, especially ones that clearly sound machine-like, to reduce discomfort and provide better service. But more work is needed to test that hypothesis.

The Research Brief is a short take on interesting academic work.
Jianna Jin is an assistant professor of marketing at the University of Notre Dame's Mendoza College of Business. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 
