As if there weren't enough to worry about, DIY Botox is now trending. Across social media, people have been uploading close-ups of their foreheads mapped out with tiny dots and offering step-by-step advice on how to inject Botox-like products they purchased online. (Yes, it is as dangerous as it sounds.) "Probably needed a hug," one TikToker wrote. "Learned how to do my own tox instead." On the Reddit forum r/DIYCosmeticProcedures, members also share tips for injecting themselves at home with everything from fat-dissolving injections to dermal fillers and Botox.

One of the most popular Botox alternatives to emerge online, with references often accompanied by discount codes, is Innotox, a Korean over-the-counter injectable. Like Botox, Innotox is a neurotoxin that contains botulinum toxin type A. Unlike Botox, which comes as a powder, it arrives as a ready-to-use liquid, making it convenient for self-administration. And unlike Botox, which is FDA approved, Innotox is not authorized in the U.K. or U.S.

Botulinum toxin A, which, when injected, blocks nerve signals to stop muscles in the face from moving, thereby reducing the appearance of wrinkles, is highly toxic and can have catastrophic effects if it's not administered safely and properly. That's assuming you can even be sure that what you've purchased online is the authentic product.

Just as blondes going back to their natural hair color has become a well-known recession indicator, people are now injecting their own faces to save a few hundred dollars. But while one trend leaves you with a darker shade of hair, the other could leave you permanently disfigured. Doctors and licensed injectors are shouting from the rooftops about the risks, and sometimes fatal side effects, of incorrect administration of the neurotoxin. And no, it's not because they're worried about losing business.

Medytox, which produces Innotox, recently opened an investigation into the unauthorized importation of Innotox to the U.S., U.K., and other countries. "Botulinum toxins should be administered only by qualified healthcare professionals in a medically appropriate setting," Tom Albright, CEO of Luvantas, a subsidiary of Medytox, told The Guardian. "Administration requires a deep understanding of facial anatomy and aesthetic principles, which cannot be replicated in consumer-administered or unregulated environments."

If licensed professionals aren't even self-injecting Botox at home, there's probably a reason why.
If your lunch order is a little lacking in portion size, your name might have something to do with it, at least according to some social media users. A viral theory claims that takeout orders from fast-casual chains placed under men's names receive heftier portions than those placed under women's names.

After months, even years, of gender-related speculation circulating online, one TikTok creator decided to put the theory to the test and conduct a series of experiments. She placed identical orders at Chipotle, one using her own name, Emily Joy Lemus, and the other using the name "Andrew." Holding the bowls side by side, to the naked eye there does seem to be a perceivable, if marginal, difference, with Andrew's order piled slightly higher.

Still skeptical, those in the comments demanded scientific proof. Lemus ran the experiment again, but this time she weighed the two identical orders on a food scale, one placed under the name "Tom," the other under her own name. While again the difference wasn't unequivocal to the eye, the scale told a different story. Tom's bowl clocked in at 714 grams, while Emily's was only 686 grams, a nearly 30-gram gap. "This is insane that this is a hack," she said.

On Reddit and Threads, others reported that the trick has worked for them, too. "It's frustrating having to pay for extra portions when men get that much just for being men," one Reddit user wrote. Some, however, remain unconvinced. "As a current employee, nobody is looking at names," one Chipotle employee assured. A Chipotle manager added, "We make the orders as quickly as possible for anyone no matter what gender you are." Others suspect it is simply a matter of different employees being more or less heavy-handed with their scoops. Fast Company has reached out to Chipotle for comment on the portion-size debate.

Recent research does lend credibility to the idea of an unconscious bias in portion size when it comes to perceived gender. A 2025 study published in the Journal of Experimental Social Psychology found participants associated men with larger portions, while women were expected to be satisfied with less. However, the study didn't offer any particularly strong evidence that this perception affects actual behavior.

Wanting to test her theory further, Lemus also took her informal experiment to the Mediterranean fast-casual chain Cava. This time, she found the opposite to be true: The order with the woman's name clocked in heavier than the one with the man's. "What that tells me is that Cava is for the girls," she said.
As AI chatbots become ubiquitous, states are looking to put up guardrails around AI and mental health before it's too late. With millions of people turning to AI for advice, chatbots have begun posing as free, instant therapists, a phenomenon that, for now, remains almost completely unregulated. In the regulatory vacuum around AI, states are stepping in to quickly erect guardrails where the federal government hasn't.

Earlier this month, Illinois Governor JB Pritzker signed a bill into law that limits the use of AI in therapy services. The bill, the Wellness and Oversight for Psychological Resources Act, blocks the use of AI to provide "mental health and therapeutic decision-making," while still allowing licensed mental health professionals to employ AI for administrative tasks like note-taking.

The risks inherent in nonhuman algorithms doling out mental health guidance are myriad, from encouraging recovering addicts to have "a small hit of meth" to engaging young users so successfully that they withdraw from their peers. One recent study found that nearly a third of teens find conversations with AI as satisfying as, or more satisfying than, real-life interactions with friends.

States pick up the slack, again

In Illinois, the new law is designed to protect patients from "unregulated and unqualified AI products," while also protecting the jobs of Illinois' thousands of qualified behavioral health providers, according to the Illinois Department of Financial & Professional Regulation (IDFPR), which coordinated with lawmakers on the legislation. "The people of Illinois deserve quality healthcare from real, qualified professionals and not computer programs that pull information from all corners of the internet to generate responses that harm patients," IDFPR Secretary Mario Treto Jr. said. Violations of the law can result in a $10,000 fine.

Illinois has a history of successfully regulating new technologies. The state's Biometric Information Privacy Act (BIPA), which governs the use of facial recognition and other biometric systems for Illinois residents, has tripped up many tech companies accustomed to operating with regulatory impunity. That includes Meta, a company that's now all-in on AI, including chatbots like the ones that recently exposed chats some users believed to be private in an open feed.

Earlier this year, Nevada enacted its own set of new regulations on the use of AI in mental health services, blocking AI chatbots from representing themselves as capable of or qualified to provide mental or behavioral healthcare. The law also prevents schools from using AI to act as a "counselor, social worker or psychologist" or from performing other duties related to the mental health of students. Utah, too, added restrictions this year around the mental health applications of AI chatbots, though its regulations don't go as far as those in Illinois or Nevada.

The risks are serious

In February, the American Psychological Association met with U.S. regulators to discuss the dangers of AI chatbots pretending to be therapists. The group presented its concerns to an FTC panel, citing the case last year of a 14-year-old in Florida who died by suicide after becoming obsessed with a chatbot made by the company Character.AI. "They are actually using algorithms that are antithetical to what a trained clinician would do," APA Chief Executive Arthur C. Evans Jr. told The New York Times. "Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is."

We're still learning more about those risks. A recent study out of Stanford found that chatbots marketing themselves for therapy often stigmatized users dealing with serious mental health issues and issued responses that could be inappropriate or even dangerous. "LLM-based systems are being used as companions, confidants, and therapists, and some people see real benefits," co-author and Stanford Assistant Professor Nick Haber said. "But we find significant risks, and I think it's important to lay out the more safety-critical aspects of therapy and to talk about some of these fundamental differences."