2026-02-10 17:23:17| Fast Company

In 2023, the science fiction literary magazine Clarkesworld stopped accepting new submissions because so many were generated by artificial intelligence. As near as the editors could tell, many submitters pasted the magazine's detailed story guidelines into an AI and sent in the results. And they weren't alone. Other fiction magazines have also reported a high number of AI-generated submissions.

This is only one example of a ubiquitous trend. A legacy system relied on the difficulty of writing and cognition to limit volume. Generative AI overwhelms the system because the humans on the receiving end can't keep up. This is happening everywhere. Newspapers are being inundated by AI-generated letters to the editor, as are academic journals. Lawmakers are inundated with AI-generated constituent comments. Courts around the world are flooded with AI-generated filings, particularly from people representing themselves. AI conferences are flooded with AI-generated research papers. Social media is flooded with AI posts. In music, open source software, education, investigative journalism, and hiring, it's the same story.

Like Clarkesworld's initial response, some of these institutions shut down their submissions processes. Others have met the offensive of AI inputs with a defensive response, often involving a counteracting use of AI. Academic peer reviewers increasingly use AI to evaluate papers that may have been generated by AI. Social media platforms turn to AI moderators. Court systems use AI to triage and process litigation volumes supercharged by AI. Employers turn to AI tools to review candidate applications. Educators use AI not just to grade papers and administer exams, but as a feedback tool for students.

These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects. Society suffers if the courts are clogged with frivolous, AI-manufactured cases.
There is also harm if the established measures of academic performance (publications and citations) accrue to those researchers most willing to fraudulently submit AI-written letters and papers rather than to those whose ideas have the most impact. The fear is that, in the end, fraudulent behavior enabled by AI will undermine systems and institutions that society relies on.

Upsides of AI

Yet some of these AI arms races have surprising hidden upsides, and the hope is that at least some institutions will be able to change in ways that make them stronger. Science seems likely to become stronger thanks to AI, yet it faces a problem when the AI makes mistakes. Consider the example of nonsensical, AI-generated phrasing filtering into scientific papers. A scientist using an AI to assist in writing an academic paper can be a good thing, if it is used carefully and with disclosure.

AI is increasingly a primary tool in scientific research: for reviewing literature, for programming, and for coding and analyzing data. And for many, it has become a crucial support for expression and scientific communication. Pre-AI, better-funded researchers could hire humans to help them write their academic papers. For many authors whose primary language is not English, hiring this kind of assistance has been an expensive necessity. AI provides it to everyone.

In fiction, fraudulently submitted AI-generated works cause harm, both to the human authors now subject to increased competition and to readers who may feel defrauded after unknowingly reading the work of a machine. But some outlets may welcome AI-assisted submissions with appropriate disclosure and under particular guidelines, and leverage AI to evaluate them against criteria like originality, fit, and quality. Others may refuse AI-generated work, but this will come at a cost. It's unlikely that any human editor or technology can sustain an ability to differentiate human from machine writing.
Instead, outlets that wish to exclusively publish humans will need to limit submissions to a set of authors they trust not to use AI. If these policies are transparent, readers can pick the format they prefer and read happily from either or both types of outlets.

We also don't see any problem if a job seeker uses AI to polish their resume or write better cover letters: The wealthy and privileged have long had access to human assistance for those things. But it crosses the line when AIs are used to lie about identity and experience, or to cheat on job interviews.

Similarly, a democracy requires that its citizens be able to express their opinions to their representatives, or to each other through a medium like the newspaper. The rich and powerful have long been able to hire writers to turn their ideas into persuasive prose, and AIs providing that assistance to more people is a good thing, in our view. Here, AI mistakes and bias can be harmful. Citizens may be using AI for more than just a time-saving shortcut; it may be augmenting their knowledge and capabilities, generating statements about historical, legal, or policy factors they can't reasonably be expected to independently check. Today's commercial AI text detectors are far from foolproof.

Fraud booster

What we don't want is for lobbyists to use AIs in astroturf campaigns, writing multiple letters and passing them off as individual opinions. This, too, is an older problem that AIs are making worse. What differentiates the positive from the negative here is not any inherent aspect of the technology; it's the power dynamic. The same technology that reduces the effort required for a citizen to share their lived experience with their legislator also enables corporate interests to misrepresent the public at scale. The former is a power-equalizing application of AI that enhances participatory democracy; the latter is a power-concentrating application that threatens it.
In general, we believe writing and cognitive assistance, long available to the rich and powerful, should be available to everyone. The problem comes when AIs make fraud easier. Any response needs to balance embracing that newfound democratization of access with preventing fraud.

There's no way to turn this technology off. Highly capable AIs are widely available and can run on a laptop. Ethical guidelines and clear professional boundaries can help, at least for those acting in good faith. But there won't ever be a way to totally stop academic writers, job seekers, or citizens from using these tools, either as legitimate assistance or to commit fraud. This means more comments, more letters, more applications, more submissions. The problem is that whoever is on the receiving end of this AI-fueled deluge can't deal with the increased volume.

What can help is developing assistive AI tools that benefit institutions and society, while also limiting fraud. And that may mean embracing the use of AI assistance in these adversarial systems, even though the defensive AI will never achieve supremacy.

Balancing harms with benefits

The science fiction community has been wrestling with AI since 2023. Clarkesworld eventually reopened submissions, claiming that it has an adequate way of separating human- and AI-written stories. No one knows how long, or how well, that will continue to work. The arms race continues.

There is no simple way to tell whether the potential benefits of AI will outweigh the harms, now or in the future. But as a society, we can influence the balance of harms it wreaks and opportunities it presents as we muddle our way through the changing technological landscape.

Bruce Schneier is an adjunct lecturer in public policy at Harvard Kennedy School. Nathan Sanders is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University. This article is republished from The Conversation under a Creative Commons license. Read the original article.


Category: E-Commerce

 

LATEST NEWS

2026-02-10 17:15:00| Fast Company

Layoffs are at their highest level since 2009, and we're also experiencing the lowest hiring on record in the job market. But AI spending is also reaching all-time highs, especially among Big Tech companies, which are on an extravagant spending spree. As I recently reported, Alphabet, Meta, Microsoft, and Amazon are forecast to drop a staggering $650 billion on AI in 2026 alone. And while many companies are pouring a lot of that money (we're talking hundreds of billions) into building massive data centers, hoping to establish a long-term strategic advantage in the AI arms race, many are still hiring workers for jobs that utilize AI skills.

So, what are those skills? While many people assume the most in-demand AI skill is coding, according to a new report, it's actually not. Here's a look at what recruiters and companies are looking for right now.

The most in-demand AI skills

A recent report from online freelance marketplace Upwork found that the AI skill for which hiring is growing fastest is AI video generation and editing (a type of design and creative work). Demand for that skill is up over 329% year over year (YoY). That refers to the ability to use AI tools to save time by generating and editing video content from text, images, or audio.

Some of the other AI skills that are most in demand include the following (by category):

Coding and web development: Artificial intelligence integration is up 178%.
Data science and analytics: Data annotation and labeling is up 154%.
Customer service and admin support: E-commerce management is up 130%.
Design and creative work: AI image generation and editing is up 95%.

Job skills are foundational, not replaceable

"While the World Economic Forum estimates that 39% of workers' skills will be transformed or become redundant by 2030, only a small share of complex tasks can be fully automated by today's AI," according to the report.
While workers are increasingly concerned about being displaced by AI, Upwork’s findings show companies still rank talent acquisition and retention as their top strategic priority (consistently ahead of innovation and technology adoption). This means that instead of replacing workers with AI, businesses are still prioritizing adaptable and agile learners slightly ahead of those who can build or understand AI tools (at least, for now).


Category: E-Commerce

 

2026-02-10 17:07:20| Fast Company

The 2026 FIFA World Cup will be the largest in history, and it’s meeting a growing American soccer fanbase on home turf for the first time since the ’90s. With companies paying millions to reach these fans, the challenge is standing out from the noise. On this episode of FC Explains, Fast Company Senior Staff Editor Jeff Beer explores what he’s learned from Men in Blazers co-founder Roger Bennett about how brands can leverage compelling storytelling and authentic fan culture, which sometimes matter more than the action on the field. Beer also shares insights from executives at major brands like Verizon and Anheuser-Busch about their World Cup marketing strategies to build lasting fan connections through global league sponsorships and tournament partnerships, while avoiding the “cultural wallpaper” effect that often happens at major sporting events.


Category: E-Commerce

 

Latest from this category

10.02 Target CEO shuffles leadership team as his first big move after taking over
10.02 Paramount sweetens its bid for Warner Bros. Discovery with additional benefits
10.02 Commerce Secretary Howard Lutnick admits to meeting Epstein, reversing previous claims
10.02 The type of coffee you drink may matter more for your brain than how much
10.02 How sanctions are stifling Russia's oil exports
10.02 AI activation will define 2026
10.02 Institutions are drowning in AI-generated text and they can't keep up
10.02 Job hiring is growing fastest for this AI skill, and it's not coding

All news

10.02 Stocks Slightly Lower into Final Hour on US Economic Data, Earnings Outlook Jitters, Technical Selling, Financial/Retail Sector Weakness
10.02 Facebook is offering Meta AI-powered animations for profile photos
10.02 Tomorrow's Earnings/Economic Releases of Note; Market Movers
10.02 Bull Radar
10.02 Bear Radar
10.02 This Itch.io bundle to help Minnesota includes over 1,200 games and costs just $10
10.02 What Makes This Trade Great: NKTR
10.02 Target CEO shuffles leadership team as his first big move after taking over