Burger King is getting into the Halloween spirit. The fast-food chain just introduced its first-ever Monster Menu to kick off spooky season. According to a news release, the vamped-up menu will drop on Sept. 30.

"BK fans have come to expect something spirited from us during the Halloween season, and each year we try to bring even more fun to families," said Joel Yashinsky, chief marketing officer of Burger King US&C, in the release. "This year, we've dialed up the fun and flavor, not only with our Monster Menu line-up complete with themed menu innovation, packaging and a special crown, but also with collectible buckets and Scooby-Doo toys, creating even more experiences for everyone."

The menu will feature Halloween-themed twists on fan favorites: the Jack-o'-Lantern Whopper, which comes on a bright orange bun topped with black sesame seeds; Vampire Nuggets, which come in the shape of fangs; Mummy Mozzarella Fries; and a Franken-Candy Sundae.

There is also a special-edition meal just for kids: the King Jr. Meal, featuring Vampire Nuggets, along with a line-up of spooky collectible toys. To make the Monster Menu drop even more exciting, guests can also leave BK with a limited-edition, Monster Menu-inspired Halloween Bucket, starting October 13.

BK is not the first fast-food brand to get in on spooky season. McDonald's announced it will bring back its Boo Buckets and its Halloween-themed menu this year, too. Wendy's also introduced a Wednesday-inspired Meal of Misfortune, as well as a Frosty Frights Kids' Meal.

Last year proved to be a big one for brands sinking their teeth into Halloween. Even businesses like Chipotle and Bush's Baked Beans had a case of the seasonal scaries, as they sold costumes of their own beloved food items, many of which even turned out to be popular selections for the scariest day of the year.
Category:
E-Commerce
AstraZeneca laid out plans on Monday to switch to a direct listing of its shares in the United States, as the drugmaker seeks to maximise gains from a booming U.S. stock market, even as it said it was not exiting London.

The decision to remain UK-based and listed there will be of some relief to British investors after media reports suggested the Anglo-Swedish drugmaker, London's most valuable company, was considering ditching its UK listing in favour of the U.S. London's stock market has been shrinking as companies move away for higher valuations and access to deeper capital markets elsewhere, particularly the U.S., prompting regulators to reform listing rules in a bid to score some wins.

AstraZeneca said it would list its shares on the New York Stock Exchange and move away from its current depositary receipts structure, with trading expected to begin on February 2, 2026. Trading in fully listed stocks is generally more liquid than in ADRs, attracting more investors. The company will remain headquartered in the UK and listed in London and Stockholm, with the plan subject to a shareholder vote on November 3.

Its London-listed shares rose roughly 1% on Monday, taking the company's gains for the year to about 6%. They have underperformed domestic rival GSK, which is up 13.6%, and the UK's broader, blue-chip FTSE 100 index, which has gained 14.2%.

Commitment to UK

Nearly 22% of AstraZeneca's shareholder base is from North America, its largest shareholder region, according to LSEG data, in line with other top UK-based blue-chip companies. Iain Pyle at Aberdeen Group, a shareholder, said the main takeaway from the announcement was AstraZeneca's "re-commitment" to its primary listing in the UK.
"From our point of view, (AstraZeneca) remains an attractive investment on a fundamental basis, with a broad pipeline still undervalued by the market – the listing location doesn't alter that view."

AstraZeneca Chair Michel Demare said the proposed "harmonised listing structure" would support the company's long-term growth strategy. "Enabling a global listing structure will allow us to reach a broader mix of global investors," he said.

A spokesperson for Britain's Treasury welcomed AstraZeneca retaining its London listing, while the London Stock Exchange said there would be no change to the drugmaker's place on the FTSE 100 following the switch. Peel Hunt analysts viewed AstraZeneca's plan to stay in the UK as positive in the short term, but cautioned that U.S. success might prompt others to follow suit.

U.S. investment and visibility

Over the past decade, the FTSE 100 has severely underperformed U.S. markets, gaining only 53% while the S&P 500 more than tripled in value. Wall Street's main indices, the Dow Jones Industrial Average, the S&P 500 and the Nasdaq Composite, have hit multiple record highs this month, broadening the market's appeal.

Companies are also ramping up U.S. investments to avoid hefty tariffs threatened by President Donald Trump's administration. AstraZeneca has pledged to invest $50 billion by 2030 in manufacturing in the U.S., its biggest market by sales. It has also said it will cut some direct-to-patient U.S. drug prices as drugmakers face pressure from the Trump administration to reduce prices.

The U.S. market remains pivotal for AstraZeneca, accounting for more than 40% of revenue in 2024. The company is betting on its U.S. expansion and expected launches to reach $80 billion in annual revenue by 2030 and offset generic competition.
Earlier this month, AstraZeneca paused a planned 200 million pound ($268.80 million) investment in its research site in Cambridge, England, becoming the latest drugmaker to pull back from the UK, citing a tough business environment. ($1 = 0.7440 pounds)

Pushkala Aripaka; additional reporting by Maggie Fick, Josephine Mason, Sarah Young, Charlie Conchie, and Danilo Masoni, Reuters
In the absence of stronger federal regulation, some states have begun regulating apps that offer AI therapy as more people turn to artificial intelligence for mental health advice. But the laws, all passed this year, don't fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn't enough to protect users or hold the creators of harmful technology accountable.

"The reality is millions of people are using these tools and they're not going back," said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.

___

EDITOR'S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988. There is also an online chat at 988lifeline.org.

___

The state laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah placed certain limits on therapy chatbots, including requiring them to protect users' health information and to clearly disclose that the chatbot isn't human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.

The impact on users varies. Some apps have blocked access in states with bans. Others say they're making no changes as they wait for more legal clarity.

And many of the laws don't cover generic chatbots like ChatGPT, which are not explicitly marketed for therapy but are used by an untold number of people for it. Those bots have attracted lawsuits in horrific instances where users lost their grip on reality or took their own lives after interacting with them.

Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed that the apps could fill a need, noting a nationwide shortage of mental health providers, high costs for care and uneven access for insured patients.
Mental health chatbots that are rooted in science, created with expert input and monitored by humans could change the landscape, Wright said.

"This could be something that helps people before they get to crisis," she said. "That's not what's on the commercial market currently."

That's why federal regulation and oversight are needed, she said. Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat, on how they "measure, test and monitor potentially negative impacts of this technology on children and teens." And the Food and Drug Administration is convening an advisory committee Nov. 6 to review generative AI-enabled mental health devices.

Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections for people who report bad practices by companies, Wright said.

Not all apps have blocked access

From "companion" apps to AI therapists to mental wellness apps, AI's use in mental health care is varied and hard to define, let alone write laws around. That has led to different regulatory approaches. Some states, for example, take aim at companion apps that are designed just for friendship, but don't wade into mental health care. The laws in Illinois and Nevada outright ban products that claim to provide mental health treatment, threatening fines up to $10,000 in Illinois and $15,000 in Nevada.

But even a single app can be tough to categorize. Earkick's Stephan said there is still a lot that is very muddy about Illinois' law, for example, and the company has not limited access there.

Stephan and her team initially held off on calling their chatbot, which looks like a cartoon panda, a therapist.
But when users began using the word in reviews, they embraced the terminology so the app would show up in searches. Last week, they backed off using therapy and medical terms again. Earkick's website described its chatbot as "Your empathetic AI counselor, equipped to support your mental health journey," but now it's a chatbot for self care.

"Still, we're not diagnosing," Stephan maintained.

Users can set up a panic button to call a trusted loved one if they are in crisis, and the chatbot will "nudge" users to seek out a therapist if their mental health worsens. But it was never designed to be a suicide prevention app, Stephan said, and police would not be called if someone told the bot about thoughts of self-harm.

Stephan said she's happy that people are looking at AI with a critical eye, but worried about states' ability to keep up with innovation. "The speed at which everything is evolving is massive," she said.

Other apps blocked access immediately. When Illinois users download the AI therapy app Ash, a message urges them to email their legislators, arguing misguided legislation has banned apps like Ash "while leaving unregulated chatbots it intended to regulate free to cause harm." A spokesperson for Ash did not respond to multiple requests for an interview.

Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the goal was ultimately to make sure licensed therapists were the only ones doing therapy.

"Therapy is more than just word exchanges," Treto said. "It requires empathy, it requires clinical judgment, it requires ethical responsibility, none of which AI can truly replicate right now."

One chatbot company is trying to fully replicate therapy

In March, a Dartmouth College-based team published the first known randomized clinical trial of a generative AI chatbot for mental health treatment. The goal was to have the chatbot, called Therabot, treat people diagnosed with anxiety, depression or eating disorders.
It was trained on vignettes and transcripts written by the team to illustrate an evidence-based response. The study found users rated Therabot similar to a therapist and had meaningfully lower symptoms after eight weeks compared with people who didn't use it. Every interaction was monitored by a human who intervened if the chatbot's response was harmful or not evidence-based.

Nicholas Jacobson, a clinical psychologist whose lab is leading the research, said the results showed early promise but that larger studies are needed to demonstrate whether Therabot works for large numbers of people.

"The space is so dramatically new that I think the field needs to proceed with much greater caution than is happening right now," he said.

Many AI apps are optimized for engagement and are built to support everything users say, rather than challenging people's thoughts the way therapists do. Many walk the line of companionship and therapy, blurring intimacy boundaries therapists ethically would not.

Therabot's team sought to avoid those issues. The app is still in testing and not widely available. But Jacobson worries about what strict bans will mean for developers taking a careful approach. He noted Illinois has no clear pathway for providing evidence that an app is safe and effective.

"They want to protect folks, but the traditional system right now is really failing folks," he said. "So, trying to stick with the status quo is really not the thing to do."

Regulators and advocates of the laws say they are open to changes. But today's chatbots are not a solution to the mental health provider shortage, said Kyle Hillman, who lobbied for the bills in Illinois and Nevada through his affiliation with the National Association of Social Workers.

Not everybody who's feeling sad needs a therapist, he said. But for people with real mental health issues or suicidal thoughts, "telling them, 'I know that there's a workforce shortage, but here's a bot,' that is such a privileged position."
___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute's Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.

Devi Shastri, Associated Press health writer