You're at your usual weekly team meeting. The team leader asks for ideas, and you immediately come up with the best one. It's not just clever. It's perfect. You rush to say it, glowing with anticipation. Silence. Nobody reacts. You walk out deflated, wondering how a group of smart people could ignore the obvious answer.

The assumption is simple: if the idea is sound, it should carry weight. We tend to believe that the one with the best ideas has the greatest impact. We take for granted that influence flows from competence and that those who are right, early and often, naturally shape decisions. But decades of research in social psychology and decision science tell a different story. In group settings, being right doesn't automatically translate into influence. In fact, one of the reasons ideas fail to land is that being right too early can undermine your influence. Here's why even brilliant ideas face immediate resistance.

1. The Ego Threat

You may think you're helping, but solving the puzzle first can make others feel small. People don't just want the answer; they want the sweat equity of finding it together. They're not rejecting your idea because it's bad. They're pushing it away because it feels forced on them rather than discovered together. They feel threatened rather than persuaded.

2. Logic vs. Shortcuts

Wouldn't it be wonderful if our ideas were judged purely on logic and data (diagnostics)? But most of us are busy, tired, distracted, or simply eager to move on to the next task. So groups often rely on shortcuts such as who sounds confident, who talks the most, and who is more assertive (proxies). Such shortcuts may drive quick decisions, but they rarely lead to results. If you drop a maverick idea before the group is ready, you're essentially asking an overwhelmed group to do the hard work of thinking outside the box. Chances are they'll lean on proxies rather than on substance such as diagnostics. Influence isn't about having the loudest voice. It's about having the best-timed one.

3. The Consensus Comfort Zone

Groups love sticking to what feels familiar. It's safer and lets everyone feel like they're working together. If you toss out a big, unusual insight right away, you don't look like a visionary. You look like you're playing a different game than the rest. The team will reject the disruption because, unconsciously, its members are protecting the direction and rhythm of the group.

How to Make Ideas Land

To stop being the "ignored expert" and start being the influential leader, you need to stop selling facts and start managing social currency and timing. Here's what works:

1. Practice strategic silence

Don't jump in with the solution. Practicing strategic silence means you first consider issue relevance, issue readiness, and target responsiveness before speaking up. Let the group feel the problem. Listen to others' perspectives. When you finally speak, tie your answer to what they care about in that moment. Now your idea will feel more like a relief.

2. Show the "why," not the "what"

If you just drop the answer, you're asking people to trust your brain rather than the facts. Instead, narrate your logic. By sharing your reasoning and the why, you're giving them the map you used plus time to process. Now they're on the same page as you.

3. Lower the ego shield

Present your idea as a 90% complete thought and leave the last 10% for the group to solve. For example, ask questions like "What obstacles do you see?" or "What would make this easier to implement?" You're not lowering your confidence; you're inviting collaboration.
In return, you aren't just right anymore; you're the person who helped the team find the right answer together. Accuracy is essential, but social recognition is the currency of influence. Start thinking less about winning with facts and being the first to offer a solution, and think more about how people want to arrive at a conclusion with you.
Much like its peers in the tech industry, Oracle is pouring money into AI infrastructure. The tech giant inked a lucrative $300 billion deal with OpenAI last year to build out AI data centers, in a bid to compete with companies like Amazon and Microsoft. But the deal requires Oracle to spend a significant amount of money up front, a move that is now pushing the company to cull its workforce.

According to recent reports, Oracle is planning major layoffs that would reportedly affect thousands of jobs. The company had already earmarked about $1.6 billion for restructuring costs this year, largely due to employee severance costs, indicating there would be job cuts. As of February, that sum has increased by $500 million, bringing overall restructuring costs to $2.1 billion. Bloomberg has reported that the layoffs would impact many parts of the business and could take effect this month; some of the job losses will also target roles that AI is rendering less essential. The forthcoming job cuts were framed as broader than Oracle's usual rolling approach to layoffs; the company typically avoids large-scale layoffs that merit a public announcement. Oracle would also effectively freeze hiring in its cloud division, according to Bloomberg.

Oracle joins a growing list of companies that are trimming headcount due to AI, but as with many other employers, there's limited evidence that the company is replacing workers with AI en masse. Instead, these layoffs largely seem to be driven by Oracle's extensive investments in AI, which could take years to pay off. Oracle is currently raising $50 billion in debt and equity to finance its AI aspirations, and analysts have said the company will likely continue losing money on this venture until 2030.

Last month, Jack Dorsey announced major layoffs at his fintech company Block, which drew widespread consternation. Dorsey framed those job losses, which affected 40% of the company's workforce, as the direct result of efficiency gains from AI. But many companies have also used AI as a convenient explanation for more pedestrian cost-cutting measures, even as economists have argued that AI is not yet displacing workers on a large scale. Some companies have cited AI rather than blaming issues like immigration policy and tariffs, which might not be as politically expedient or appealing to shareholders. Others, like Oracle, are slashing jobs over AI, but not necessarily because they're outright using AI to replace workers. Microsoft, too, has made sweeping investments in AI, spending tens of billions of dollars on data centers while laying off over 15,000 people in 2025. The layoffs at companies like Microsoft and Amazon have also targeted middle managers, the sorts of jobs that can't exactly be replaced by AI at the moment.

The AI boom has also helped cement an era of "forever layoffs," in which even big tech jobs no longer hold the promise of stability. Since the pandemic, tech employers have become especially reliant on layoffs, a trend that has accelerated with the rise of AI. Whether or not workers are being explicitly displaced or ousted due to automation, few jobs are now safe if companies value AI over human capital.
With its many extraterrestrial guest stars, The X-Files was always meant to be a spooky show. One of its earliest episodes, however, is now eerie in a way its creators likely never intended. In "Ghost in the Machine," a first-season standout that originally aired in 1993, a sentient, corporate-created AI turns deadly when it perceives a threat to its existence. That description may rightly sound near-identical to any number of previous killer-computer plotlines: 2001: A Space Odyssey is the most obvious touchstone, along with Terminator 2, which had come out just two years earlier. What sets this X-Files episode apart from other entries in the lethally sentient AI canon is that it pits a safety-minded tech CEO against a belligerent U.S. Department of Defense, which is desperate to use this company's AI in guardrail-free combat operations. Sound familiar?

A ghost in the machine

Across its nine original seasons, two feature films, and a reboot, The X-Files cultivated an overarching mythology. The show's creators wisely took frequent off-roading adventures, though, with standalone "Monster of the Week" episodes that helped keep fans on their toes. "Ghost in the Machine" is one such excursion, only the monster in this case turned out to be AI. The episode begins with the CEO of the too-cutely named software company Eurisko ("you risk-o"?) writing a memo about shutting down the Central Operating System AI that runs corporate HQ. Unfortunately, because the AI is surveilling the entire building, it picks up on this plan and chooses instead to shut down, with extreme prejudice, the CEO himself, via electrocution.

Enter FBI special agents Fox "Spooky" Mulder (David Duchovny) and Dana Scully (Gillian Anderson). Their investigation quickly leads them to Eurisko's founder, Brad Wilczek, who is initially willing to take the fall for his CEO's murder. By digging a bit deeper, though, Mulder discovers that not only is Eurisko's AI the true culprit, the Department of Defense has been trying to get its hands on that AI for years, only to be snubbed each time by Wilczek. ("It's a learning machine," one character says. "A computer that actually thinks. And it's become something of a holy grail for our more acquisitive colleagues in the Department of Defense.") Eventually, Mulder and Scully work with Wilczek to fry the AI, much to the chagrin of a Defense Department mole who has been working at Eurisko the whole time. File closed!

Back in 1993, "Ghost in the Machine" fit snugly into the paranoid "the truth is out there" ethos of a sci-fi show about alien conspiracies. Now, it edges closer to the realm of documentary. Although the show would return to the subject of AI 25 years later in one of the reboot episodes, 2018's "Rm9sbG93ZXJz," a more Black Mirror-y spin on fearing one's smartphone, it's the older and admittedly cheesier outing that is far more relevant in 2026. Its most glaring point of prescience, of course, is that it appears to have predicted with spooky accuracy the recent battle between the U.S. government and AI heavyweight Anthropic, not to mention the government's use of AI in its current war with Iran.

"Our more acquisitive colleagues in the Department of Defense"

Unlike his fictional counterpart in The X-Files, Anthropic cofounder Dario Amodei was very much interested in lending his AI model to Uncle Sam. Last July, Anthropic signed a $200 million contract with the U.S. Department of Defense to provide its Claude model for use in classified and operational work.
It was only when negotiations began over what such work might actually entail that irreconcilable differences emerged. As the back-and-forth dragged on through late 2025 and into this January, the major sticking points involved Anthropic's demand for usage restrictions on Claude: mainly, that it shouldn't be deployed for mass domestic surveillance or for building fully autonomous weapons without human oversight. The Pentagon insisted otherwise.

Here's where the similarities between Amodei and Eurisko's Wilczek get really interesting. (The fact that Amodei bears something of a physical resemblance to Wilczek can't be ignored either.) Why did the fictional founder want to protect civilian populations from the U.S. Defense Department using his AI? He explains it himself in the following exchange with Mulder:

Wilczek: After the bomb was dropped on Hiroshima and Nagasaki, Robert Oppenheimer spent the rest of his life regretting he'd ever glimpsed an atom.

Mulder: Oppenheimer may have regretted his actions, but he never denied responsibility for them.

Wilczek: He loved the work, Mr. Mulder. His mistake was in sharing it with an immoral government. I won't make the same mistake.

Amodei publicly presents himself in a similar light, if with less on-the-record talk about government immorality. He has frequently recommended Richard Rhodes's book The Making of the Atomic Bomb in interviews, reportedly used to give copies of the book to new employees, and keeps one on prominent display in the Anthropic library. Though Amodei's peer, OpenAI founder Sam Altman, has also spoken often of Oppenheimer as a cautionary example, Amodei has now proven more willing to stick to his guns on the issue.

In recent weeks, Defense Secretary Pete Hegseth gave Anthropic an ultimatum: drop its demand for safety guardrails or face consequences. Anthropic refused. As a result, Hegseth made good on his threat, formally designating Anthropic a "supply chain risk," the first time the Pentagon has applied that label to a U.S. AI firm. Anthropic has since sued the Pentagon over the measure. As a bonus, the White House labeled Anthropic a "radical left, woke" company, and President Trump directed all federal agencies to stop using Claude. Meanwhile, former Oppenheimer-recaller Altman has agreed to let OpenAI fill the military void, albeit with guardrails, according to the company.

AI at war

The X-Files episode "Ghost in the Machine" ends with the Department of Defense thwarted and its desired AI, which has ostensibly been destroyed, telegraphing to viewers that it is still alive, so to speak: an epilogue like the hand flying out of a grave at the end of a horror movie. In real life, though, the government got hold of its AI without the need for any innuendo. Despite the formal ban on federal use of Anthropic's tools, parts of the U.S. military continue to rely on Claude in combat operations, since the tools were already deeply embedded. (Removing them completely could take months.) In the meantime, according to the Wall Street Journal, the current war with Iran is demonstrating Claude's usefulness. AI tools are "helping gather intelligence, pick targets, plan bombing missions and assess battle damage at speeds not previously possible," the report reveals. AI helps commanders manage supplies of everything from ammunition to spare parts and lets them choose the best weapon for each objective.

On February 28, at the start of the U.S.-Israel war on Iran, a Tomahawk missile struck an Iranian elementary school, claiming the lives of at least 175 people, most of them children.
Recent reporting strongly suggests that not only was the U.S. at fault for the missile strike, but that the school was on a U.S. target list and may have been mistaken for a military site. As of this writing, nobody in the U.S. government has claimed responsibility for the mistake.

The X-Files episode and movies like Terminator 2 stoked the fear that a sentient AI might decide to wipe out all of humanity. They couldn't foresee the more immediate threat in 2026: that an immoral government would decide to wipe out a portion of humanity and let AI take the blame.