Elon Musk has launched a $97.4 billion bid to take control of OpenAI. The Wall Street Journal reports a group of investors led by Musk's xAI submitted an unsolicited offer to the company's board of directors on Monday. The group wants to buy the nonprofit that controls OpenAI's for-profit arm. When asked for comment, an OpenAI spokesperson pointed Engadget to an X post from CEO Sam Altman. "No thank you but we will buy twitter for $9.74 billion if you want," Altman wrote on February 10 on the social media platform Musk owns. "It's time for OpenAI to return to the open-source, safety-focused force for good it once was," Musk said in a statement his attorney shared with The Journal. "We will make sure that happens." It's hard to say how serious Musk's bid is and what chance, if any, it has of succeeding. OpenAI is not a traditional company, and the nonprofit structure Sam Altman and others at the company want to move away from may in fact protect it from Musk's offer. Were OpenAI purely a for-profit company with traditional shares, Musk's bid would likely trigger what's known in corporate law as a Revlon moment, in which, under certain circumstances, the company's board of directors would be forced to sell the company to the highest bidder to maximize shareholder value. Musk, as you can imagine, wasn't a fan of Altman's joke, writing "Swindler" in response and later calling him "Scam Altman." This article originally appeared on Engadget at https://www.engadget.com/ai/elon-musk-wants-to-buy-openai-for-974-billion-215221105.html?src=rss
A new iPhone update patches a flaw that could allow an attacker to turn off a nearly seven-year-old USB security feature. Apple's release notes for iOS 18.3.1 and iPadOS 18.3.1 say the bug, which allowed the deactivation of USB Restricted Mode, may have been exploited in an "extremely sophisticated attack against specific targeted individuals." The release notes describe the now-patched security flaw as allowing a "physical attack," which suggests the attacker needed the device in hand to exploit it. So, unless your device was hijacked by extremely sophisticated attackers, there was nothing to panic about even before Monday's update. USB Restricted Mode, introduced in iOS 11.4.1, prevents USB accessories from accessing your device's data if it hasn't been unlocked for an hour. The idea is to protect your iPhone or iPad from law enforcement devices like Cellebrite and Graykey. It's also the reason for the message asking you to unlock your device before connecting it to a Mac or Windows PC. In line with its typical policy, Apple didn't detail who or what entity used the attack in the wild, only noting that the company is "aware of a report that this issue may have been exploited." Security researcher Bill Marczak of the University of Toronto's Citizen Lab reported the flaw. In 2016, while in grad school, he discovered the iPhone's first known zero-day remote jailbreak, which a cyberwarfare company sold to governments. You can make sure USB Restricted Mode is activated by heading to Settings > Face ID (or Touch ID) & Passcode. Scroll down to Accessories in the list and ensure the toggle is off, which it is by default. Somewhat confusingly, toggling the setting off means the security feature is on, because the list shows features with allowed access.
As usual, you can install the update by heading to Settings > General > Software Update on your iPhone or iPad. This article originally appeared on Engadget at https://www.engadget.com/cybersecurity/apple-patches-iphone-exploit-that-allowed-for-extremely-sophisticated-attack-214237852.html?src=rss
Roblox, Discord, OpenAI and Google are launching a nonprofit organization called ROOST, or Robust Open Online Safety Tools, which hopes "to build scalable, interoperable safety infrastructure suited for the AI era." The organization plans on providing free, open-source safety tools for public and private organizations to use on their own platforms, with a special focus on child safety to start. The press release announcing ROOST specifically calls out plans to offer "tools to detect, review, and report child sexual abuse material (CSAM)." Partner companies are providing funding for these tools, and the technical expertise to build them, too. The operating theory of ROOST is that access to generative AI is rapidly changing the online landscape, making the need for "reliable and accessible safety infrastructure" all the more urgent. And rather than expect a smaller company or organization to create its own safety tools from scratch, ROOST wants to provide them free of charge. Child online safety has been the issue du jour since the Children and Teens' Online Privacy Protection Act (COPPA) and the Kids Online Safety Act (KOSA) started making their way through Congress, even though both failed to pass in the House. At least some of the companies involved in ROOST, specifically Google and OpenAI, have also already pledged to stop AI tools from being used to generate CSAM. The child safety issue is even more pressing for Roblox. As of 2020, two-thirds of all US children between nine and 12 played Roblox, and the platform has historically struggled to address child safety. Bloomberg Businessweek reported that the company had a "pedophile problem" in 2024, which prompted multiple policy changes and new restrictions around children's DMs.
ROOST won't make all of these problems go away, but it should make dealing with them easier for any other organization or company that finds itself in Roblox's position. This article originally appeared on Engadget at https://www.engadget.com/big-tech/roblox-discord-openai-and-google-found-new-child-safety-group-194445241.html?src=rss