2025-11-17 09:30:00| Fast Company

Stories about AI-generated fabrications in the professional world have become part of the background hum of life since generative AI hit the mainstream three years ago. Invented quotes, fake figures, and citations that lead to nonexistent research have shown up in academic publications, legal briefs, government reports, and media articles. We can often understand these events as technical failures: the AI hallucinated, someone forgot to fact-check, and an embarrassing but honest mistake became a national news story. But in some cases, they represent the tip of a much bigger iceberg: the visible portion of a much more insidious phenomenon that predates AI but that will be supercharged by it. Because in some industries, the question of whether a statement is true or false doesn't matter much at all; what counts is whether it is persuasive.

While talking heads have tended to focus on the post-truth moment in politics, consultants and other knowledge producers have been happily treating the truth as a malleable construct for decades. If it is better for the bottom line for the data to point in one direction rather than another, someone out there will happily conduct research with the sole goal of finding the right answer. Information is commonly packaged in decks and reports with the intention of supporting a client narrative or a firm's own goals, while inconvenient facts are either minimized or ignored entirely.

Generative AI provides an incredibly powerful tool for supporting this kind of misdirection: even if it is not pulling data out of thin air and inventing claims from the ground up, it can provide a dozen ways to hide the truth or to make alternative facts sound convincing. Wherever the appearance of rigor matters more than rigor itself, AI becomes not a liability but a competitive advantage. Not to put too fine a point on it: many knowledge workers spend much of their time producing what the philosopher Harry Frankfurt calls bullshit.
And what is bullshit, according to Frankfurt? Its essence, he says, is not that it is false but that it is phony. The liar, Frankfurt explains, cares about truth, even if only negatively, since he or she wants to conceal it. The bullshitter, however, does not care at all. They may even tell the truth by accident. What matters to bullshitters isn't accuracy but effect: how their words work on an audience, what impression they create, what their words allow them to get away with. For many individuals and firms in these industries, words in reports and slide decks are not there to describe reality or to conduct honest argumentation; they are there to do the work of the persuasive bullshitter.

Knowledge work is one of the leading providers of what the anthropologist David Graeber famously called bullshit jobs: jobs that involve work that even those doing it quietly suspect serves no real purpose.
For decades, product vendors, analysts, and consultants have been rewarded for producing material that looks rigorous, authoritative, and data-driven: the thirty-page slide deck, the glossy report, snazzy frameworks, and slick 2x2s. The material did not need to be good. It simply needed to look good. And if that is the goal, if words are meant to perform rather than inform, if the aim is to produce effective bullshit rather than tell the truth, then it makes perfect sense to use AI. AI can produce bullshit better, more quickly, and in greater volume than any human being. So, when consultants and analysts turn to generative AI to help them with their reports and presentations, they are obeying the underlying logic and fundamental goals of the system in which they operate. The problem here is not that AI produces bullshit; the problem is that many in this business are willing to say whatever needs to be said to pad the bottom line.

Bullshit vs. quality

The answer here is neither new policies nor training programs. These have their places, but at best they address symptoms rather than underlying causes. If we want to address causes rather than apply band-aids, we have to understand what we have lost in the move to bullshit, because then we can begin figuring out how to recover it.

In Zen and the Art of Motorcycle Maintenance, Robert Pirsig uses the term quality to name the property that makes a good thing good. This is an intangible characteristic: it cannot be defined, but everyone knows it when they see it. You know quality when you run your hand along a well-made table and feel the seamless join between two pieces of wood; you know quality when you see that every line and curve is just as it should be. There is a quiet rightness to something that has this character, and when you see it, you glimpse what it means for something to be genuinely good.
If the institutions that are responsible for creating knowledge (not just consulting firms but universities, corporations, governments, and media platforms) were animated by a genuine sense of quality, it would be far harder for bullshit to take root. Institutions teach values through what they reward, and we have spent decades rewarding the production of bullshit. Consultants simply do in excelsis what we have all learned to do to some degree: produce something that looks good without caring whether it really is good.

First you wear the mask, they say, and then the mask wears you. Initially, perhaps, we can produce bullshit while at least retaining our capacity to see it as bullshit. But over time, the longer we operate in the bullshit-industrial complex, the more bullshit we produce, the more we tend to lose even that capacity. We drink the Kool-Aid and start thinking that bullshit is quality. AI does not cause that blindness. It simply reveals it.

What leaders can do

Make life hard. Bullshit flourishes because it is easy. If we want to produce quality work, we need to take the harder road. AI isn't going away, nor should we wish it away. It is an incredible tool for enhancing productivity and allowing us to do more with our time. But it often does so by encouraging us to produce bullshit, because that is the quickest and easiest path in a world that has given up on quality. The challenge is to harness AI without allowing ourselves to be beguiled into shortcuts that ultimately pull us down into the mire. To avoid that trap, leaders must take deliberate steps at both the individual and organizational levels.

At the individual level: Never accept anything that AI outputs without making it your own first. For every sentence, every fact, every claim, every reference, ask yourself: Do I stand by that? If you don't know, you need to check the claims and think through the arguments until they truly become your own.
Often, this will mean rewriting, revising, reassessing, and even flat-out rejecting. And this is hard when there is an easier path available. But the fact that it is hard is what makes it necessary.

At the organizational level: Yes, we must trust our people to use AI responsibly. But, if we choose not to keep company with the bullshitters of the world, we must also commit and recommit our organizations to producing work of real quality. That means instituting real, rigorous quality checks. Leaders need to stand behind everything their team produces. They need to take responsibility and affirm that they are allowing it out the door not because it sounds good but because it really is good. Again, this is hard. It takes time and effort. It means not giving a throwaway glance across the text but settling down to read and understand it in detail. It means being prepared to challenge ourselves and to challenge our teams, not just periodically but every day.

The path forward is not to resist AI or to romanticize slowness and inefficiency. It is to be ruthlessly honest about what we are producing and why. Every time we are tempted to let AI-generated material slide because it looks good enough, we should ask: Are we creating something of quality, or are we just adding to the pile of bullshit? That question, and our willingness to answer it honestly, will determine whether AI becomes a tool for excellence or just another engine that trades insight for appearance.


Category: E-Commerce

 

2025-11-17 09:00:00| Fast Company

If you work in an office, your next coworker might not be human at all. Workers are already well-acquainted with artificial intelligence in the office, using AI tools to take notes, automate tasks, and assist with workflow. Now, Microsoft is working on a new kind of AI agent that doesn't just assist but acts as an employee.

These "Agentic Users" will soon have their own email, Teams account, and company ID, just like a regular coworker. "Each embodied agent has its own identity, dedicated access to organizational systems and applications, and the ability to collaborate with humans and other agents," states a Microsoft product roadmap document. These agents can attend meetings, edit documents, communicate via email and chat, and perform tasks autonomously.

The rise of AI has already spelled death for middle management and is having a significant and disproportionate impact on entry-level workers in the American labor market, according to economists at Stanford's Digital Economy Lab. Gartner projects that by 2028, 33% of enterprise software applications will incorporate agentic AI, and at least 15% of daily business decisions will be made autonomously by AI agents.

If AI employees can soon take over the grunt work no one wants to do, like scheduling and reporting, leaving people to handle the big-picture tasks, that's a win, right? Yet it also raises questions: Whose job is it to supervise AI employees? How much can AI really be entrusted with? And what happens if, or when, something goes wrong?

Last year, Deloitte surveyed organizations on the cutting edge of AI and found that just 23% reported feeling highly prepared to manage AI-related risks. According to one study, 40% of agentic AI projects could be canceled by the end of 2027 due to inadequate risk controls, unclear business value, and escalating costs.
As AI rapidly establishes itself as a workplace norm, "2025 will be remembered as the moment when companies pushed past simply experimenting with AI and started building around it," Microsoft said in a blog post accompanying its annual Work Trend Index report. The rollout of Agentic Users could start later this November, according to internal documents first reported by The Register. With Microsoft Ignite this week, stay tuned.


Category: E-Commerce

 

