Bluesky, which has surged in the days following the US election, said on Friday that it won't train on its users' posts for generative AI. The declaration stands in stark contrast to the AI training policies of X (Twitter) and Meta's Threads. Probably not coincidentally, Bluesky's announcement came the same day X's new terms of service, allowing third-party partners to train on user posts, went into effect.

"A number of artists and creators have made their home on Bluesky, and we hear their concerns with other platforms training on their data," Bluesky posted (via The Verge) on Friday. "We do not use any of your content to train generative AI, and have no intention of doing so."

In a follow-up post, the decentralized social platform clarified that it does use AI to help with content moderation. "Bluesky uses AI internally to assist in content moderation, which helps us triage posts and shield human moderators from harmful content," the company posted. Bluesky also added that it uses AI in the algorithms powering its Discover feed. "None of these are Gen AI systems trained on user content," Bluesky stressed.

The Verge points out that Bluesky's robots.txt (the file that tells outside parties what they may scrape from a website) doesn't prevent OpenAI, Google or other leading GenAI companies from crawling its data. The company justified that potential hole by pointing to the platform's open and public nature. "Just as robots.txt files don't always prevent outside companies from crawling those sites, the same applies here," spokesperson Emily Liu told The Verge. "That said, we'd like to do our part to ensure that outside orgs respect user consent and are actively discussing within the team on how to achieve this."

Although Bluesky is still the underdog in a race with X and Threads, the platform has picked up steam after the US election. It passed the 15 million user threshold on Wednesday after adding more than a million users in the past week. A report from web analytics company SimilarWeb noted that the signup surge coincided with a spike in X deactivations. It found that more than 115,000 US web visitors deactivated their [X] accounts on November 7, "more than on any previous day of Elon Musk's tenure." In parallel, web traffic and daily active users for Bluesky increased dramatically in the week before the election, and then again after election day.

This article originally appeared on Engadget at https://www.engadget.com/social-media/the-suddenly-hot-bluesky-says-it-wont-train-ai-on-your-posts-220034195.html?src=rss
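As a brief aside on the robots.txt mechanism mentioned above: a site's robots.txt is just a plain-text file of crawler rules, and whether a particular AI crawler is blocked can be checked programmatically. The sketch below is illustrative only and not part of the article; it assumes Python's standard-library urllib.robotparser, uses the publicly documented "GPTBot" (OpenAI) and "Google-Extended" (Google) crawler tokens as examples, and points at a hypothetical Bluesky profile URL.

```python
# Hypothetical check (not from the article): does a site's robots.txt
# disallow known AI-training crawlers? Uses only the Python standard library.
from urllib import robotparser

ROBOTS_URL = "https://bsky.app/robots.txt"            # example target
TEST_PAGE = "https://bsky.app/profile/example.test"   # hypothetical page

rp = robotparser.RobotFileParser()
rp.set_url(ROBOTS_URL)
rp.read()  # fetch and parse the robots.txt file

# "GPTBot" and "Google-Extended" are documented AI-training tokens;
# "*" is the catch-all rule that applies to unnamed crawlers.
for agent in ("GPTBot", "Google-Extended", "*"):
    verdict = "allowed" if rp.can_fetch(agent, TEST_PAGE) else "disallowed"
    print(f"{agent}: {verdict}")
```

A site that wanted to opt out would add "User-agent: GPTBot" / "Disallow: /" style rules for each token, though, as the article notes, such rules are advisory and only work if crawlers choose to respect them.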
Category:
Marketing and Advertising
A damning report from the Anti-Defamation League published Thursday on the unprecedented amount of racist and violent content on Steam Community has prompted a US Senator to take action. In a letter spotted by The Verge, Senator Mark Warner (D-VA) asked Valve CEO Gabe Newell how he and his company are addressing the issue.

"My concern is elevated by the fact that Steam is the largest single online gaming digital distribution and social networking platform in the world with over 100 million unique user accounts and a user base similar in scale to that of the traditional social media and social network platforms," Warner wrote. The senator also cited Steam's online conduct policy, which states users may not "upload or post illegal or inappropriate content [including] [real] or disturbing depictions of violence" or harass other users or Steam personnel. "Valve must bring its content moderation practices in line with industry standards or face more intense scrutiny from the federal government for its complicity in allowing hate groups to congregate and engage in activities that undoubtedly puts Americans at risk," Warner writes.

Congress doesn't have the ability to take action on Valve or any platform except to shine light on the problem through letters and committee hearings. The Supreme Court overturned two state laws in June that prevented government officials from communicating with social media companies about objectionable content.

This also isn't the first time that Congress has raised concerns with Valve about extremist and racist content created by users or players in one of its products. The Senate Committee on the Judiciary sent a letter to Newell in 2023 to express concerns about players posting and spouting racist language in Valve's multiplayer online arena game Dota 2.

We reached out to Valve for comment. We will update this story if we receive a statement or reactions from Valve.

This article originally appeared on Engadget at https://www.engadget.com/social-media/adls-report-on-racist-steam-community-posts-prompts-a-letter-from-virginia-senator-214243775.html?src=rss
Category:
Marketing and Advertising
Reporters Without Borders (RSF) said this week it's pressing criminal charges against X (Twitter) in France related to a Kremlin disinformation campaign that used the nonprofit as a prop to spread fake news. The organization said legal means are its last resort in its fight against the bogus stories, designed to foster pro-Russia and anti-Ukraine sentiment, that festered on the platform.

"X's refusal to remove content that it knows is false and deceitful, as it was duly informed by RSF, makes it complicit in the spread of the disinformation circulating on its platform," RSF director of advocacy Antoine Bernard said in a statement. "These legal proceedings seek to remind X, a powerful social media company, and its executives that they can be held criminally responsible if they knowingly provide a platform and tools for disseminating false information, identity theft, misrepresentation, and defamation, offences punishable under the French Penal Code," RSF attorney Emmanuel Daoud wrote.

RSF published an investigation in September detailing how a fabricated video was planted and spread by Russia on the Elon Musk-owned social platform. The fake clip was made to look like a BBC-produced one, including the news organization's logo. It made the erroneous case that RSF conducted a study that revealed a large number of Ukrainian soldiers sympathizing with Nazism. False claims that Ukraine is a pro-Nazi nation have been a common propaganda tactic used by Russia since its 2022 invasion. The narrative is designed to engender support for the Kremlin-initiated war, which is estimated to have killed a million or more Ukrainian people.

RSF's investigation revealed that an account called "Patricia," claiming to be a translator in France, planted the seed for the disinformation. However, the report found that the account's profile picture came from a Russian website featuring photos of blond women intended for use as avatars. RSF says that even the account's name seemed to have been automatically generated by X. In addition, the organization says Grok, X's AI chatbot with access to live data about the platform, claimed the account "has very strong opinions, often in support of Russia and Vladimir Putin, while severely criticizing Ukraine and its supporters in Europe."

The investigation found the video then took off, spreading through a chain that included a pro-Kremlin Irish entrepreneur living in Russia, a Kremlin propagandist with a large following on Telegram and even Russian officials. It was also shared by highly influential bloggers known for unflinching support of Vladimir Putin.

"In this story, the Russian authorities have acted a bit like they were laundering dirty information," an RSF representative said in a September video about the investigation (translated from French). "They took false information, they laundered it through official channels. And then, this piece of information that wasn't actual information was reintroduced into public discourse to make it look credible."

Russia's bogus video was widely shared on X and Telegram. Reporters Without Borders says the clip reached half a million combined views by September 13. To capture its frustration with the blow to its credibility, the nonprofit cited the quote (of unknown origin but often attributed to Mark Twain): "A lie can travel halfway around the world while the truth is still putting on its shoes."
RSF says it filed 10 reports of illegal content with X through the social channel's reporting system required by the EU's Digital Services Act (DSA). "After a series of rejections from X and requests for additional information, which RSF provided, none of the reports resulted in the removal of the defamatory content targeting our organisation and its advocacy director," RSF wrote.

In July, the US Justice Department said it uncovered and dismantled a Russian propaganda network using nearly 1,000 accounts to push pro-Kremlin posts on X. The DOJ claimed the accounts posed as Americans and were made using AI. In October, The Wall Street Journal reported that Elon Musk held multiple private calls with Vladimir Putin from 2022 into this year, describing the contacts as a closely held secret in government.

"X's refusal to remove content that it knows is false and deceitful, as it was duly informed by RSF, makes it complicit in the spread of the disinformation circulating on its platform," Bernard wrote in his statement. "X provides those who spread falsehoods and manipulate public opinion with a powerful arsenal of tools and unparalleled visibility, while granting the perpetrators total impunity. It's time for X to be held accountable. Pressing criminal charges is the last resort against the disinformation and war propaganda that RSF has fallen victim to, which is proliferating on this Muskian network."

This article originally appeared on Engadget at https://www.engadget.com/social-media/reporters-without-borders-says-its-pressing-charges-against-x-200005117.html?src=rss
Category:
Marketing and Advertising