Every CEO I know wants their team to use AI more, and for good reason: it can supercharge almost every area of their business and make employees vastly more efficient. Employee use of AI is a business imperative, but as it becomes more common, how can companies avoid major security headaches?

Sift’s latest data found that 31% of consumers admit to entering personal or sensitive information into GenAI tools like ChatGPT, and 14% of those individuals explicitly reported entering company trade secrets. Other types of information that people admit to sharing with AI chatbots include financial details, nonpublic facts, email addresses, phone numbers, and information about employers. At its core, this data reveals that people are increasingly willing to trust AI with sensitive information.

This overconfidence with AI isn’t limited to data sharing. The same comfort level that leads people to input sensitive work information also makes them vulnerable to deepfakes and AI-generated scams in their personal lives. Sift data found that concern about AI being used to scam someone has decreased 18% in the last year, and yet the number of people who admit to being successfully scammed has increased 62% since 2024. Whether it’s sharing trade secrets at work or falling for scam texts at home, the pattern is the same: familiarity with AI is creating dangerous blind spots.

The Confidence Trap

Often in a workplace setting, employees turn to AI to address a specific problem: looking for examples to round out a sales proposal, pasting an internal email to “punch it up,” sharing nonfinal marketing copy for tone suggestions, or disclosing product road map details to a customer service bot to help answer a complex ticket. This behavior often stems from good intentions, whether that’s trying to be more efficient, helpful, or responsive. But as the data shows, digital familiarity can create a false sense of security.
The people who think they “get AI” are the ones most likely to leak sensitive data through it or to struggle to identify malicious content. Every time an employee drops nonpublic context into a GenAI tool, they are, knowingly or not, transmitting business-sensitive data into a system that may log, store, or even use it to train future outputs. Not to mention, if a data leak were ever to occur, a hacker would be privy to a treasure trove of confidential information.

So what should businesses do? The challenge with this kind of data exposure is that traditional monitoring won’t catch it. Because these tools are often used outside of a company’s intranet (its internal software network), employees are able to input almost any data they can access. The uncomfortable truth is that you probably can’t know exactly what sensitive information your employees are sharing with AI platforms. Unlike a phishing attack, where you can trace the breach, AI data sharing often happens in the shadows of personal accounts.

But that doesn’t mean you should ban AI usage outright. Try to infer the scale of the problem with anonymous employee surveys. Ask: What AI tools are you using? For which tasks do you find AI most helpful? And what do you wish AI could do? While an employee may not disclose sharing sensitive information with a chatbot, understanding more generally how your team is using AI can identify potential areas of concern, and potential opportunities.

Instead of trying to track every instance retroactively, focus on prevention. A blanket AI ban isn’t realistic and puts your organization at a competitive disadvantage. Instead, establish clear guidelines that distinguish between acceptable and prohibited data types. Set a clear red line on what can’t be entered into public GenAI tools: customer data, financial information, legal language, and internal documents. Make it practical, not paranoid. To encourage responsible AI use, provide approved alternatives.
Create company-sanctioned AI workflows for everyday use cases that don’t retain data, or route them through tools that do not use any inputs for AI training. Make sure your IT teams vet all AI tools for proper data governance. This is especially important because different account types for the same AI tool can have different data retention policies. Vetting also helps employees understand the potential dangers of sharing sensitive data with AI chatbots.

Encourage employee training that addresses both professional and personal AI risks. Provide real-world examples of how innocent AI interactions inadvertently expose trade secrets, but also educate employees about AI-powered scams they might encounter outside of work. The same overconfidence that leads to workplace data leaks can make employees targets for sophisticated fraud schemes, potentially compromising both personal and professional security.

If you discover that sensitive information has been shared with AI platforms, act quickly, but don’t panic. Document what was shared, when, and through which platform. Conduct a risk assessment that asks: How sensitive was the information? Could it compromise competitive positioning or regulatory compliance? You may need to notify affected parties, depending on the nature of the data. Then, use these incidents as learning opportunities. Review how the incident occurred and identify the necessary safeguards.

While the world of AI chatbots has changed since 2023, there is a lot we can learn from a situation Samsung experienced a few years ago, when employees in its semiconductor division shared source code, meeting notes, and test sequences with ChatGPT. This exposed proprietary software to OpenAI and leaked sensitive hardware testing methods. Samsung’s response was swift: it restricted ChatGPT uploads to minimize the potential for sharing sensitive information, launched internal investigations, and began developing a company-specific AI chatbot to prevent future leaks.
While most companies lack the resources to build chatbots themselves, they can achieve a similar result by using an enterprise-grade account that specifically opts their accounts out of AI training.

AI can bring massive productivity gains, but that doesn’t make its usage risk-free. Organizations that anticipate and address this challenge will leverage AI’s benefits while maintaining the security of their most valuable information. The key is recognizing that AI overconfidence poses risks both inside and outside the office, and preparing accordingly.
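The “red line” on prohibited data types described above can also be enforced in software, not just in policy. As a minimal sketch (the pattern names, patterns, and `screen_prompt` function are illustrative assumptions, not any vendor’s API), a pre-submission filter might scan outgoing prompts for obvious markers of sensitive data before they ever reach a public GenAI tool:

```python
import re

# Illustrative patterns only: a real deployment would need far broader
# coverage (customer names, account numbers, source code, contract
# language, etc.) and would likely use a commercial DLP tool.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the kinds of sensitive data detected in a prompt.

    An empty list means the text passed this (illustrative) screen;
    a non-empty list means the prompt should be blocked or redacted
    before it is sent to a public GenAI tool.
    """
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

# Example: a prompt containing a customer email address is flagged
print(screen_prompt("Summarize this thread from jane.doe@example.com"))
```

A filter like this can never catch everything, which is why the guidelines and training described above still matter, but it turns a paper policy into a checkpoint that fires at the moment of risk.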
UnitedHealth Group says it is cooperating with federal criminal and civil investigations involving its market-leading Medicare business. The health care giant said Thursday that it had contacted the Department of Justice after reviewing media reports about investigations into certain elements of its business. “(UnitedHealth) has a long record of responsible conduct and effective compliance,” the company said in a Securities and Exchange Commission filing.

Earlier this year, The Wall Street Journal said federal officials had launched a civil fraud investigation into how the company records diagnoses that lead to extra payments for its Medicare Advantage, or MA, plans. Those are privately run versions of the government’s Medicare coverage program, mostly for people ages 65 and over. The company’s UnitedHealthcare business covers more than 8 million people as the nation’s largest provider of Medicare Advantage plans. The business has been under pressure in recent quarters due to rising care use and rate cuts.

The Journal said in February, citing anonymous sources, that the probe had focused in recent months on the company’s billing practices. The paper has since said that a federal criminal health care-fraud unit was investigating how the company used doctors and nurses to gather diagnoses that bolster payments. UnitedHealth said in the filing Thursday that it “has full confidence in its practices and is committed to working cooperatively with the Department throughout this process.”

UnitedHealth Group Inc. runs one of the nation’s largest health insurance and pharmacy benefits management businesses. It also operates a growing Optum business that provides care and technology support. UnitedHealth raked in more than $400 billion in revenue last year as the third-largest company in the Fortune 500. Its share price topped $630 last fall to reach a new all-time high.
But the stock has mostly shed value since December, when UnitedHealthcare CEO Brian Thompson was fatally shot in midtown Manhattan on his way to the company’s annual investor meeting. A suspect, Luigi Mangione, has been charged in connection with the shooting.

In April, shares plunged some more after the company cut its forecast due to a spike in health care use. A month later, former CEO Andrew Witty resigned, and the company withdrew its forecast entirely, saying that medical costs from new Medicare Advantage members were higher than expected.

The stock price slipped another 3%, or $10.35, to $282.16 in midday trading Thursday. That represents a 55% drop from its all-time high. The Dow Jones Industrial Average, of which UnitedHealth is a component, also fell slightly. Meanwhile, the broader S&P 500 rose. UnitedHealth will report its second-quarter results next Tuesday.

Tom Murphy, AP health writer
President Donald Trump took to social media Thursday morning to support Elon Musk’s car company, a startling development given their bitter public feud. “I want Elon, and all businesses within our Country, to THRIVE,” Trump wrote on Truth Social.

The post wasn’t enough to help Tesla’s stock, which fell sharply after the company reported another quarter of lackluster financial results and Musk warned of some potentially rough quarters into next year. At midday, the stock was down around 9%. Late Wednesday, Tesla said revenue fell 12% and profit dropped 16% in the April-June quarter. Many prospective buyers have been turned off by Musk’s foray into right-wing politics, and the competition has ramped up in key markets such as Europe and China.

Investors have been unnerved by Musk’s social media spat with the president because Trump has threatened to retaliate by ending government contracts and breaks for Musk’s various businesses, including Tesla. But Trump struck a starkly different tone Thursday morning. “Everyone is stating that I will destroy Elon’s companies by taking away some, if not all, of the large scale subsidies he receives from the U.S. Government. This is not so!” Trump wrote. “The better they do, the better the USA does, and that’s good for all of us.”

After Trump’s massive budget bill passed earlier this month, Tesla faces the loss of the $7,500 EV tax credit and stands to make much less money from selling regulatory credits to other automakers. Trump’s tariffs on countries including China and Mexico will also cost Tesla hundreds of millions of dollars, the company said on its earnings call. Musk has blasted the budget bill on his own social media platform X for adding to U.S. debt at a time when it is already too large. The Tesla CEO has called the budget pushed by the president a “disgusting abomination” and has threatened to form a new political party.
On Wednesday’s call, Musk said the electric vehicle maker will face “a few rough quarters” as it moves into a future focused less on selling cars and more on offering people rides in self-driving cars. He also talked up the company’s business making humanoid robots. But he acknowledged those businesses are a ways off from contributing to Tesla’s bottom line.

Tesla began a rollout in June of its paid robotaxi service in Austin, Texas, and hopes to introduce the driverless cabs in several other cities soon. Musk told analysts that the service will be available to “probably half of the population of the U.S. by the end of the year; that’s at least our goal, subject to regulatory approvals.”

“We’re in this weird transition period where we’ll lose a lot of incentives in the U.S.,” Musk said, adding that Tesla probably could have “a few rough quarters” ahead. He added, though, “Once you get to autonomy at scale in the second half of next year, certainly by the end of next year, I would be surprised if Tesla’s economics are not very compelling.”