Media personalities and online influencers who sow social division for a living blame the rise of assassination culture on Antifa and MAGA. Meanwhile, tech CEOs gin up fears of an AI apocalypse. But both are smokescreens hiding a bigger problem: Algorithms decide what we see, and in trying to win their approval, we're changing how we behave. Increasingly, that behavior is violent.

The radicalization of young men on social networks isn't new, but modern algorithms are accelerating it. Before Facebook and Twitter (now X) stopped showing the latest posts from your friends at the top of your feed and started surfacing outrageous posts from people you don't know, Al Qaeda operatives were quietly recruiting isolated and disillusioned young men to join the Caliphate, one by one. The days of man-to-man proselytizing have long since been replaced by opaque algorithms that display whatever content gets the most likes, comments, and shares. Enrage to engage is a business model. Algorithmic design amplifies the most hysterical content, normalizing extremist views to the point where outrage feels like civic participation.

It's a kind of shell game. Here's how it works:

- Politicians and CEOs spin apocalyptic narratives
- Online influencers chime in
- Algorithms spread the most outrageous content
- Public sentiment hardens
- Violence gains legitimacy
- Our democracy erodes

The algorithms don't just amplify; they also decide who sees what, creating parallel worlds that make it harder for us to understand members of the opposing tribe. For example, Facebook's News Feed algorithm prioritizes posts that generate emotional reactions. YouTube's recommendation system steers viewers toward similar content that keeps them watching. And how TikTok's For You page keeps users glued to the app remains a total mystery. You search for a yoga mat on your phone, and the ranking algorithms decide you're a liberal. Your neighbor searches for trucks, and the system tags them as a conservative.
Before long, your feed fills with mindfulness podcasts and climate headlines, while your neighbor's features off-roading videos and political commentary about overregulation. Each of you thinks you're just seeing what's out there, but you're actually looking at customized realities.

Up to now, the killing of right-wing activist Charlie Kirk, along with the brutal killings of Minnesota lawmaker Melissa Hortman and her husband, embassy staffers Sarah Lynn Milgram and Yaron Lischinsky, UnitedHealthcare CEO Brian Thompson, and Blackstone real-estate executive Wesley LePatner, have all been tied to a rising wave of political violence. More likely, they are the result of online radicalization accelerated by social media algorithms. Given the snail's pace of our judicial system, and the labor-intensive process of reconstructing someone's path to radicalization online, the smoking gun is elusive. In the 2018 Tree of Life synagogue shooting, it took five years to reach a conviction. In the meantime, more people consumed extremist content, giving rise to what the FBI now calls nihilistic violent extremism: violence driven less by ideology than by alienation, performative rage, and the quest for social status. By the time one case is resolved, new permission structures for violence have taken root, showing just how powerless our legal system is at policing social media platforms.

What drives these communities isn't ideology so much as a search for belonging, status, and personal power. The need for validation is intertwined with whatever or whoever is commanding the most attention at any given moment. These days, the issue that has captured the most attention is an AI apocalypse.
"As new grievances take shape around artificial intelligence and national fears of job loss, technology executives are increasingly exposed to threats of physical violence," says Alex Goldenberg, director of intelligence at Narravance, which monitors social media in real time to detect threats for clients. Are predictions of AI joblessness, stoked by algorithmic fear-mongering, a recipe for social unrest?

While high-profile tech CEOs have long traveled with security details, new data suggests those threats have extended to all corporate sectors. A study of over 2,300 corporate security chiefs at global companies with combined revenues exceeding $25 trillion found that 44% of the companies are actively monitoring mainstream social media, the deep web (content not indexed by Google), and the dark web (where criminals and dissidents go for cover). Two-thirds of those companies are increasing their physical security budgets in response to rising online threats, according to the study by security company Allied Universal.

"Before December, fewer than half of CEOs had any kind of executive protection. Now boards are demanding it," says Glen Kucera, president of Allied Universal. "Executives make up 30% of a company's value, and shareholders want them protected." Companies are responding by hardening their perimeters, hiring armed escorts and social media threat analysts, and addressing vulnerabilities at executives' homes.

For CEOs, AI is both a windfall and a minefield. It's too lucrative to ignore, but too unsettling to discuss freely. "High-profile people making controversial announcements about AI are at higher risk," says Kucera. According to Michael Gips, managing director at multinational financial and risk advisory firm Kroll, these findings fit into a broader trend. "We're living in a grievance culture now," he says. "If there's something to be grieved about, the risk is there." Even the people shaping this technology acknowledge its risks.
Sam Altman, the CEO of OpenAI, has said he believes the worst case for AI is "lights out for all of us." Elon Musk has made similar warnings, cautioning that there's "some chance that [AI] goes wrong and destroys humanity." OpenAI cofounder Ilya Sutskever reportedly talked about building a doomsday bunker for OpenAI engineers in the post-AGI world.

Narravance analysts say apocalyptic narratives around AI, especially those centered on job loss, promote online radicalization. After reading dystopian narratives about AI-driven unemployment, 17.5% of U.S. adults in a statistically significant sample said violence against Musk is justified. Musk's remark about universal job loss spread rapidly across social platforms, stripped of nuance, meme-ified, and reframed as a prophecy of societal collapse. In online communities where people are hungry for belonging and validation, Musk's rhetoric becomes the basis of permission structures that rationalize violence. Prior to his resignation from the Department of Government Efficiency (DOGE), negative sentiment toward Musk was higher: In March 2025, nearly 32% of Americans said they believed his assassination would be justified, according to another Narravance study.

On his blog, OpenAI CEO Sam Altman wrote, "The development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." The more tech leaders issue dire predictions, the more support for violence against them grows. Alarmingly, Narravance also found that respondents said violence would be justified against Alex Karp, CEO of surveillance and defense AI company Palantir (15.4%), Meta CEO Mark Zuckerberg (14.5%), Amazon CEO Jeff Bezos (13.8%), and OpenAI CEO Sam Altman (13.3%).

Fear of obsolescence

As soon as Charlie Kirk was assassinated, a video went around the world. "Ten-year-olds saw it within hours," said Jonathan Haidt, author of The Anxious Generation, at the Fast Company Innovation Festival.
Haidt argues that since 2012 the share of adolescents who say their lives feel useless has more than doubled, and that boys in particular, left without traditional guidance and immersed in social media, gaming, and pornography, are struggling to find a path to adulthood. "If you're a boy, and your life feels useless, and you see no future, everything is about getting fame or money. You have to get rich quick or become famous, otherwise you'll lose in the mating game," says Haidt. "Boys around the world, historically, have gambled. Do something big. Get recognition," he says.

A former senior social media executive who spoke on the condition of anonymity said negative narratives create desperation. "When you give people doom scenarios, they're going to be willing to do outrageous things," he says. "It's an unfortunate by-product of the social media business."

Social media meltdown

"Social media is a cancer," Utah Governor Spencer Cox said on 60 Minutes a few weeks after Kirk's murder. "It's taking all of our worst impulses and putting them on steroids . . . driving us to division and hate. These algorithms have captured our very souls." His dire warning underscores how platforms reward outrage, feed polarization, and erode the boundaries that once kept political disagreement from spilling into violence and chaos. In another interview, on Meet the Press, Cox argued that social media companies have "hacked our brains," getting people addicted to outrage in ways that fuel division and erode agency. He said he believes that social media has played a direct role in every assassination or attempted assassination in the past five to six years. "The conflict entrepreneurs are taking advantage of us, and we are losing our agency, and we have to take that back," he said.
When outrage gets amplified, all engagement looks like endorsement, and people mistake it for truth, even though it may be false or, worse yet, coordinated inauthentic activity spun up by the Chinese-controlled TikTok algorithm or Russian bot farms. According to a report from safety research nonprofit FAR.AI, with artificial intelligence already more persuasive than humans, and frontier LLMs guiding political manipulation, disinformation, and terrorism recruitment efforts, the risks are already multiplying exponentially. Predictions of a dystopian, jobless AI future pale by comparison. The real threat is the erosion of human judgment itself.

The existential risk of AI, first raised in 1976 by computer scientist Joseph Weizenbaum in his prescient book Computer Power and Human Reason, is not joblessness or humanity suspended in Matrix-style bio-pods. The danger isn't sentient machines. It's algorithms engineered to keep us engaged, enraged, and endlessly divided. The apocalypse won't come from code, but from our surrender to it.
Leaving your corporate job for a solopreneur path is a bold move, and it can feel terrifying. But as long as you're prepared, it can be a smart move, especially in the current rocky job market. I worked at one corporate job for 15 years. Then I pivoted to a new career in marketing. Eighteen months later, I was working for myself as a full-time freelance writer. Within two months of going solo, I had replaced my salary at a marketing agency, but I'd also taken a lot of baby steps in advance of making the switch. You can make the transition to solopreneurship easier if you build a safety net before you walk out the corporate door. Here's how.

Calculate how much income you'll need

The first step is to be brutally honest with yourself: How much of a reduction in pay can you stand? Odds are, you'll have an in-between period: You'll have left your corporate job, but not yet built up enough of a solo business. Can you withstand 25% of your current salary? 50%? Do you have savings to supplement the rest? I know some people who won't leave corporate jobs until they earn enough with a side hustle. But that's incredibly difficult, since you'll basically be working two jobs for a period of time. However, if that's the only way to make it work for your finances, it's an option. You'll also need to consider that you'll pay self-employment tax. A general rule of thumb is to set aside 25% to 30% of your earnings. You'll also be paying your own expenses, like any apps or tools you need to run your business.
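To see how the tax set-aside changes the math, here is a minimal back-of-the-envelope sketch. The function name and all dollar figures are illustrative assumptions, not numbers from this article; the 30% rate follows the rule of thumb above.

```python
# Back-of-the-envelope solo-income estimate. Illustrative only, not
# financial advice; the figures below are made-up assumptions.

def gross_income_needed(living_costs, business_costs, tax_set_aside=0.30):
    """Estimate the gross monthly revenue a solo business must bring in.

    tax_set_aside is the share of earnings held back for self-employment
    tax, per the 25%-30% rule of thumb.
    """
    spending = living_costs + business_costs
    # Gross it up so that after the tax set-aside, spending is still covered.
    return spending / (1 - tax_set_aside)

# Example: $4,000/month living costs plus $300 in apps and tools.
print(round(gross_income_needed(4000, 300), 2))  # ≈ 6142.86
```

The point of the gross-up: covering $4,300 of monthly costs takes roughly $6,100 in revenue once 30% is earmarked for taxes, which is the honest target to compare against your old salary.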
When you're thinking about how much you need to earn, take your costs into account.

Build your network

If you're going solo, your network is a substantial asset during your ramp-up period (and beyond). The people you know become your clients, your referrals, your sounding board for ideas. I started posting on LinkedIn consistently a full 18 months before I struck out on my own. At the time, I had no idea that I would become a solopreneur. It just seemed like a good idea to build a network since I'd started a new career. While you're still at your 9-to-5 job:

- Start connecting with industry peers, potential clients, and former coworkers.
- Join groups (like professional associations or Slack communities) where your future clients hang out.
- Show up on LinkedIn, adding value and building credibility.

Even though you're still working your 9-to-5 job, you should gradually reframe your personal brand. You want to become known as the person who can solve XYZ problem. That way, by the time you leave your job, you've planted the seeds for your solo business.

Side hustle, if you can

If your job and life allow, keep one foot in your corporate role and build your solo business on the side. This gives you some huge advantages. You can test out your pricing, positioning, and processes without the pressure of needing to replace your salary. You've also got a revenue buffer, since your 9-to-5 will keep all of your bills paid. If you put all of the money from your side hustle aside, you might have a nice cushion once you're ready to launch. I started freelancing alongside my 9-to-5 job two years before I became a solopreneur. I was able to build a portfolio of work and collect client testimonials, both of which helped immensely when I announced that I was starting a full-time writing business. Yes, it means extra hustle. I was juggling my 9-to-5 job, three kids, and a raging global pandemic. But I told myself that it was temporary.
Sometimes you don't get to choose the timing

Ideally, you get to choose the timing of your exit from the corporate world. But sometimes it's chosen for you. I was laid off from my full-time marketing job. Even though I'd been thinking about full-time freelancing for months, I kept telling myself I wasn't ready to make the leap. Because I'd been building in the background, I was able to make a fairly seamless transition. The timing wasn't my decision, but it was the direction I was headed. I wasn't starting from zero. The more momentum and clarity you build for your solo business, the more options you'll have when the moment finally arrives.
In a new legal filing, Meta is being accused of shutting down internal research that showed people who stopped using Facebook experienced less depression, anxiety, and loneliness. The allegations come as part of a lawsuit filed by several U.S. school districts against Meta, Snap, TikTok, and other social media companies. The brief, which was filed in the U.S. District Court for the Northern District of California but is not yet public, reportedly claims the study, called Project Mercury, was initiated in 2019 and was meant to explore the impact of apps on polarization, news-consumption habits, well-being, and daily social interactions. Plaintiffs in the suit say social media companies were aware that these platforms had a negative impact on the mental health of children and young adults but did not act to prevent it. The suit also alleges they misled authorities about this harm.

"We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture," Meta tells Fast Company in a statement. "The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens, like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens' experiences."

Andy Stone, Meta's communications director, downplayed the study in a social media post. "What it found was people who believed using Facebook was bad for them felt better when they stopped using it," he wrote in a thread on Bluesky. "This is a confirmation of other public research ('deactivation studies') out there that demonstrates the same effect.
It makes intuitive sense but it doesn't show anything about the actual effect of using the platform."

While the company's research showed people who stopped using Facebook for a week reported lower feelings of depression, anxiety, loneliness, and social comparison, Meta chose not to publish those findings and shut down work on the project, Reuters reports. "The company never publicly disclosed the results of its deactivation study," the suit reads. "Instead, Meta lied to Congress about what it knew." Stone, in his social media thread, implied the study was flawed and that the company's disappointment wasn't with the results but with its apparent failure to overcome "expectation effects," the idea that beliefs and expectations influence perception. The filing, though, shows that some staffers rejected Meta's belief that the findings were influenced by the existing media narrative around the company, with one allegedly saying that burying the research was no different than "the tobacco industry doing research and knowing cigs were bad and then keeping that info to themselves."

Meta has filed a motion to strike the documents at the heart of the Project Mercury allegations. The judge overseeing the case has set a hearing date for those arguments on January 26. Meta has been accused of ignoring similar research in the past. Two years ago, the company was sued by 41 states and the District of Columbia, which accused it of harming young people's mental health. The attorneys general alleged the company had knowingly designed features on Instagram and Facebook that addict children to its platforms and violated the federal Children's Online Privacy Protection Act (COPPA). In 2022, up to 95% of children ages 13 to 17 in the U.S. reported using a social media platform, with more than a third saying they use social media "almost constantly," according to the Pew Research Center.
To comply with federal regulation, social media companies generally prohibit kids under 13 from signing up to their platforms. Children have easily found ways around those bans, however. That has led some countries, including Australia and Denmark, to ban anyone under 16 from having social media accounts.