A stable “release” version of Apple’s iOS 26 is due in September, but you can now try an in-progress version, called the public beta. It previews a revamped interface and new features in apps like Messages and Phone (both with spam filtering), Camera, Wallet, and especially CarPlay. Models starting with the iPhone 15 Pro also get upgrades to the Apple Intelligence AI suite, including live translation, improved image creation, and the ability to search visually across apps. The translucent Liquid Glass interface is seeing a bit of a revival in areas such as Notification Center, after Apple toned it down in earlier betas.

Is the iOS 26 public beta safe to install?

The public beta follows four developer betas meant for app creators (although others tend to install betas out of curiosity). Adding the word “public” doesn’t mean this beta is without risks. To get it, you have to accept an agreement that absolves Apple of responsibility for any problems it may cause, including bricking (rendering the phone inoperable). It’s safest to test the public beta on a spare device, which Apple’s beta site strongly recommends. You can lower the risk to an old model or your current one by first backing up your iPhone and learning how to unbrick it and roll it back to the latest release version of iOS 18. We’ll walk you through how to do that further down. These tips may also help with glitches you encounter in the release version.

How to get the iOS 26 public beta

First, check whether your iPhone supports iOS 26. Apple’s list includes models back to 2019’s iPhone 11 and 2020’s SE (2nd generation), both using the A13 Bionic chip. An iPhone X or earlier model may show an option to download iOS 26 but won’t let you install it. Getting the beta is easy: Visit the Apple Beta site, click Sign Up, and log in with the same Apple ID your iPhone uses. Signing up provides access to all Apple OS 26 betas: iOS, iPadOS, macOS, watchOS, and tvOS, plus HomePod software.
Does installing the iOS 26 public beta void my warranty?

According to Apple’s FAQ, installing the beta won’t void your hardware warranty, although you will have to restore to a stable OS version before getting service. But within the roughly 5,500-word Apple Beta Software Agreement is the clause: “APPLE SHALL NOT BE RESPONSIBLE FOR ANY COSTS, EXPENSES OR OTHER LIABILITIES YOU MAY INCUR . . . INCLUDING BUT NOT LIMITED TO ANY DAMAGE TO ANY EQUIPMENT, SOFTWARE OR DATA.” (Fast Company has asked Apple to clarify whether “equipment” would include the iPhone hardware and will update if we get an answer.) The agreement does say that the company may provide support through its beta program, at Apple’s option. TL;DR: Don’t count on help, and take your own precautions.

How to back up your iPhone before installing the iOS 26 public beta

Before you do anything, back up your iPhone. The easiest way is online: Go to Settings, tap your name, then iCloud > iCloud Backup. Apple provides 5GB of free storage; paid tiers start at 50GB for $0.99 per month and 200GB for $2.99. You can also back up to a computer over USB. In recent versions of macOS, open Finder, click your iPhone in the left panel, then click Back Up Now. On Windows, or on macOS Mojave (10.14) and earlier, use iTunes. (Yes, it’s still out there.) Click the Device button near the top left of the iTunes window, then click Summary > Back Up Now.

How to download and install the iOS 26 public beta on your iPhone

Now comes the main event. On your iPhone, tap Settings > General > Software Update. Tap Beta Updates to see multiple options on the next screen, including Off and possibly betas for several versions of iOS. Tap to place a check mark next to iOS 26 Public Beta, then tap the back button and tap Update Now.
How to roll back from the iOS 26 beta

In the event that iOS 26 does brick your phone, a new tool called Recovery Assistant may automatically activate, allowing you to monitor the recovery process wirelessly from another Apple device. If Recovery Assistant doesn’t appear or doesn’t work, try the old-fashioned way: Connect the iPhone to your computer over USB and open Finder or iTunes (depending on your computer’s operating system). Press and release the iPhone’s volume up button, then the volume down button. Then press and hold the side button until you see the Recovery Mode screen with cable and computer icons. If you don’t see them, throw yourself at the mercy of Apple Support by contacting them online. If you do, Finder or iTunes will show the message “There is a problem with the iPhone that requires it to be updated or restored.” It’s best to select the Restore option, which erases the phone and installs the latest public release of iOS. Then restore the deleted data and settings from your backup.
Every CEO I know wants their team to use AI more, and for good reason: It can supercharge almost every area of a business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches?

Sift’s latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, the data reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn’t limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern about being scammed with AI has decreased 18% in the last year, and yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it’s sharing trade secrets at work or falling for scam texts at home, the pattern is the same: Familiarity with AI is creating dangerous blind spots.

The Confidence Trap

In a workplace setting, employees often turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to “punch it up,” sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that’s trying to be more efficient, helpful, or responsive. But as the data shows, digital familiarity can create a false sense of security.
The people who think they “get AI” are the ones most likely to leak sensitive data through it or to struggle to identify malicious content. Every time an employee drops nonpublic context into a GenAI tool, they are, knowingly or not, transmitting business-sensitive data into a system that may log it, store it, or even use it to train future outputs. And if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do? The challenge with this kind of data exposure is that traditional monitoring won’t catch it. Because these tools are often used outside a company’s intranet (its internal software network), employees are able to input almost any data they can access. The uncomfortable truth is that you probably can’t know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn’t mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern, and potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn’t realistic and puts your organization at a competitive disadvantage. Instead, establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can’t be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid. To encourage responsible AI use, provide approved alternatives.
Create company-sanctioned AI workflows for everyday use cases, built on tools that don’t retain data and don’t use inputs for AI training. Make sure your IT teams vet all AI tools for proper data governance; this is especially important because different account tiers of the same AI tool can carry different data retention policies. The vetting process also helps employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions can inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don’t panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? Depending on the nature of the data, you may need to notify affected parties. Then use these incidents as learning opportunities: Review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot we can learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung’s response was swift: It restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks.
While most companies lack the resources to build chatbots themselves, they can take a similar approach by using an enterprise-grade account that specifically opts their data out of AI training.

AI can bring massive productivity gains, but that doesn’t make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI’s benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.
UnitedHealth Group says it is cooperating with federal criminal and civil investigations involving its market-leading Medicare business. The health care giant said Thursday that it had contacted the Department of Justice after reviewing media reports about investigations into certain elements of its business. “(UnitedHealth) has a long record of responsible conduct and effective compliance,” the company said in a Securities and Exchange Commission filing.

Earlier this year, The Wall Street Journal reported that federal officials had launched a civil fraud investigation into how the company records diagnoses that lead to extra payments for its Medicare Advantage, or MA, plans. Those are privately run versions of the government’s Medicare coverage program, mostly for people ages 65 and over. The company’s UnitedHealthcare business covers more than 8 million people as the nation’s largest provider of Medicare Advantage plans. The business has been under pressure in recent quarters due to rising care use and rate cuts.

The Journal said in February, citing anonymous sources, that the probe had focused on the company’s billing practices in recent months. The paper has since reported that a federal criminal health care-fraud unit was investigating how the company used doctors and nurses to gather diagnoses that bolster payments. UnitedHealth said in the filing Thursday that it “has full confidence in its practices and is committed to working cooperatively with the Department throughout this process.”

UnitedHealth Group Inc. runs one of the nation’s largest health insurance and pharmacy benefits management businesses. It also operates a growing Optum business that provides care and technology support. UnitedHealth raked in more than $400 billion in revenue last year as the third-largest company in the Fortune 500. Its share price topped $630 last fall to reach an all-time high.
But the stock has mostly shed value since December, when UnitedHealthcare CEO Brian Thompson was fatally shot in midtown Manhattan on his way to the company’s annual investor meeting. A suspect, Luigi Mangione, has been charged in connection with the shooting. In April, shares plunged further after the company cut its forecast due to a spike in health care use. A month later, former CEO Andrew Witty resigned, and the company withdrew its forecast entirely, saying that medical costs from new Medicare Advantage members were higher than expected.

The stock price slipped another 3%, or $10.35, to $282.16 in midday trading Thursday. That represents a 55% drop from its all-time high. The Dow Jones Industrial Average, of which UnitedHealth is a component, also fell slightly, while the broader S&P 500 rose. UnitedHealth will report its second-quarter results next Tuesday.

Tom Murphy, AP health writer