Meta will no longer allow teens to chat with its AI chatbot characters in their present form. The company announced Friday that it will be "temporarily pausing teens' access to existing AI characters globally."

The pause comes months after Meta added chatbot-focused parental controls following reports that some of Meta's character chatbots had engaged in sexual conversations and other alarming interactions with teens. Reuters reported on an internal Meta policy document that said the chatbots were permitted to have "sensual" conversations with underage users, language Meta later said was "erroneous and inconsistent with our policies." The company announced in August that it was retraining its character chatbots to add "guardrails as an extra precaution" that would prevent teens from discussing self-harm, disordered eating and suicide.

Now, Meta says it will prevent teens from accessing any of its character chatbots, regardless of their parental control settings, until "the updated experience is ready." The change, which will begin "in the coming weeks," will apply to those with teen accounts, "as well as people who claim to be adults but who we suspect are teens based on our age prediction technology." Teens will still be able to access the official Meta AI chatbot, which the company says already has "age-appropriate protections in place."

Meta and other AI companies that make "companion" characters have faced increasing scrutiny over the safety risks these chatbots could pose to young people. The FTC and the Texas attorney general have both opened investigations into Meta and other companies in recent months. The issue of chatbots has also come up in a safety lawsuit brought by New Mexico's attorney general. A trial is scheduled to start early next month; Meta's lawyers have attempted to exclude testimony related to the company's AI chatbots, Wired reported this week.

This article originally appeared on Engadget at https://www.engadget.com/social-media/meta-is-temporarily-pulling-teens-access-from-its-ai-chatbot-characters-180626052.html?src=rss
Category:
Marketing and Advertising
Microsoft CEO Satya Nadella recently went on record saying that AI still needs to prove its worth if society is to adopt it long-term, but he presumably thinks his company has cracked it with its latest innovation: AI coloring books. A new Microsoft Paint feature currently rolling out to Windows Insiders lets you generate coloring book pages from a text prompt. The example Microsoft uses is "a cute fluffy cat on a donut," for which the AI tool spits out a set of slightly different options based on your prompt. You can then choose which image you want, add it to your current workspace, and copy or save it. Presumably you can also print it out for the purpose of entertaining your kids. No doubt the kind of real-world impact the Microsoft chief was alluding to.

The coloring book feature is exclusive to Copilot+ PCs, and Microsoft is also adding a fill tolerance slider that lets you adjust the precision with which the Fill tool adds color to your canvas.

As well as Paint's new coloring book feature, Microsoft has also improved its Write, Rewrite and Summarize AI functionality in Notepad, which integrates with GPT to fine-tune your writing and summarize complex notes. You'll need to sign into your Microsoft account to use cloud features, but results will now appear more quickly and let you interact with the preview without having to wait for the full response. Again, you'll need to be a Windows Insider in the Canary and Dev channels on Windows 11 to take advantage of the updates initially.

This article originally appeared on Engadget at https://www.engadget.com/ai/you-can-now-create-ai-generated-coloring-books-in-microsoft-paint-163512527.html?src=rss
After being one of the first countries in the world to block Elon Musk's Grok chatbot, Malaysia has now lifted its ban. Along with Indonesia, the country moved swiftly to temporarily halt access to X's frequently controversial AI chatbot earlier this month, after multiple reports emerged of it being used to generate deepfake sexualized images of people, including women and children. At the time, the Malaysian Communications and Multimedia Commission (MCMC) said the restrictions would remain in place until X Corp and parent xAI could prove they had enforced the necessary safeguards against misuse of that nature.

Malaysian authorities appear to be taking X at its word: the MCMC released a statement confirming it was satisfied that Musk's company has implemented the required safety measures. It added that the authorities will continue to monitor the social media platform, and that any further user safety breaches or violations of Malaysian law would be dealt with firmly.

At the time of writing, only Malaysia and Indonesia have hit Grok with official bans, though UK regulator Ofcom has opened a formal investigation into X under the country's Online Safety Act in the wake of the non-consensual sexual deepfake scandal. X has since changed its image-editing policies, and on January 14 the company said Grok will no longer allow "the editing of images of real people in revealing clothing such as bikinis." Earlier this week, the UK-based non-profit Center for Countering Digital Hate (CCDH) estimated that in the 11-day period between December 29 and January 9, Grok generated approximately 3 million sexualized images, around 23,000 of which were of children.

This article originally appeared on Engadget at https://www.engadget.com/ai/malaysia-lifts-ban-on-grok-after-taking-x-at-its-word-144457468.html?src=rss