2026-01-21 20:00:00| Fast Company

We have a growing problem making our institutions work for humans. Across society, and especially in business, humans are increasingly treated as resources to be squeezed rather than as individuals to be served. Employees become human capital to be optimized; customers become users to be converted or upsold. This tendency predates AI, but AI threatens to accelerate it dramatically: automating the depersonalization, scaling the indifference, and introducing another layer of abstraction that separates real human beings from one another.

Yet there is an alternative path. Human-centered design is often dismissed as a soft or unserious discipline, a distraction from the serious business of maximizing the commercial income to be extracted from every interaction. But it is actually the most practical route to value creation available to organizations today. When you design around real human needs, those of both customers and staff, you build the bridge between internal transformation and external results.

The Foundational Principle

In The Design of Everyday Things, design expert Donald Norman articulates a deceptively simple idea: pay close attention to the needs of human users when defining design goals. This principle applies far beyond product design. It is foundational to how organizations create value.

Human-centered design acts as a critical bridge that taps into and connects two groups of humans. On one side, customer experience drives revenue: people buy from, stay loyal to, and recommend organizations that understand and serve their actual needs. On the other side, employee experience drives execution: staff who feel understood and supported deliver better work and stay in their roles longer. Neglect either side and value leaks away, no matter how sophisticated your technology or how ambitious your strategy.

Crucially, human-centered design is not a one-time exercise conducted before systems are built. It is an ongoing discipline that begins with observation, continues through implementation, and persists as long as the system operates. Humans change. Their needs evolve. Their contexts shift. A design process that treats initial research as sufficient will produce systems that drift steadily away from the people they are meant to serve. The organizations that sustain value are those that build continuous feedback loops, returning again and again to observe, test, and refine.

Why AI Makes This Urgent

AI amplifies the consequences of getting human factors wrong. There are three reasons why human-centered design becomes especially critical in the age of AI.

First, speed and scale. When an AI system interacts with customers or processes employee workflows, its behavior can propagate across millions of touchpoints. A poorly designed interaction that might have affected dozens of people in a manual process now affects thousands or millions. The cost of inattention multiplies accordingly.

Second, the fallacy of confusing humans with machines. Management systems and technical architectures tend to assume that they are dealing with rational actors who process information logically and respond predictably. This is the same fallacy embedded in the economist's concept of homo economicus, the fictional human who optimizes utility with perfect information and no emotion. Real humans bring biases and emotions to their decisions and interactions; they bring varied cultural contexts and needs that shift depending on circumstances. Different people come to AI from radically different angles, and a system designed for an idealized user will fail actual ones.

Third, the diversity of stakeholder interactions. Not everyone affected by an AI system interacts with it directly. Some draw on its outputs at second or third hand: a manager reviewing AI-generated reports, a supplier responding to AI-optimized orders. Other stakeholders, such as government agencies, labor groups, or consumer rights advocates, have regulatory or social interests in how you implement AI. Omit any of these groups from your design process and you create friction that erodes the value you are trying to build.

Building Human-Centric AI Systems

Translating these principles into practice requires deliberate choices at every stage of AI development and deployment.

Start with personas designed for context. A single AI system may need to present itself differently depending on who it is interacting with. A customer-facing interaction might require conversational warmth, natural pacing, and even deliberate pauses that make the exchange feel human. An internal communication feeding data to supply chain managers might prioritize speed, precision, and structured formatting. An AI agent participating in a multi-agent orchestration layer might need yet another mode, one optimized for machine-readable clarity. These are not cosmetic differences. The persona an AI adopts shapes whether the humans on the other end can work with it effectively. Design these deliberately, not as afterthoughts.

Embrace the iterative spiral. Norman's concept of human-centered design follows a cycle: observation, idea generation, prototyping, testing, and then back to observation. This is not a linear checklist to be completed once. Each round of testing reveals new information about user needs that the previous round of observation missed. For example, initial research might suggest that speed is the primary requirement for a customer service AI. But watching real users interact with a prototype might reveal that some customers prefer a chattier experience with more interaction, even if it takes longer. The spiral deepens understanding as experiments scale.

Recognize the limits of self-reporting. Users do not always know what they need, and they are often not well placed to articulate their desired outcomes even when they do know. Customers might tell you they want human agents, but longer-term behavioral analysis may reveal a preference for AI solutions that eliminate waiting times. Subject matter experts and scholarly research are invaluable supplements to direct observation. The goal is to understand what actually serves people, not merely what they say they want. (This point is made particularly well with reference to the medical context in Joseph and Pagani's Designing for Health: The Human Centered Approach.)

Build in human audit layers. The temptation with AI is to automate completely, to remove humans from the loop in pursuit of efficiency. Resist it. Introduce human checkpoints that look for systemic biases, catch edge cases, and intervene where required. This is not a failure of automation but a recognition that partnership between humans and AI produces better outcomes than either alone.

The Orchestration Challenge

As organizations deploy multiple AI agents, handling sales, compliance, operations, and customer service, a new challenge emerges. These agents can conflict. Gartner predicts that 40% of enterprise applications will use multi-agent systems by year-end, and a common failure mode is already apparent: agent deadlock, where agents with different objectives provide contradictory instructions and freeze the workflow.

The solution is not purely technical. Orchestration layers can help resolve conflicts algorithmically, but they cannot substitute for human judgment in ambiguous cases. Human-centered design here means designing the human role in the system, not just the AI components. Someone must be empowered to adjudicate when the sales optimization agent and the regulatory compliance agent cannot agree. That role requires clarity about authority, access to relevant context, and the judgment to weigh competing priorities. Organizations that neglect this human layer will find their sophisticated multi-agent systems grinding to a halt.

Practical Steps

Five actions can move human-centered design from abstraction to operation:

1. Map your human touchpoints. Before any AI initiative, document every human who will interact with or be affected by the system. This includes direct users, indirect data consumers, and those with regulatory or reputational stakes. If you cannot name the humans involved, you are not ready to build.

2. Observe before you build. Spend time with actual users before defining requirements. Watch what they do, not just what they say. The gap between stated preferences and revealed behavior is where design insight lives.

3. Design your personas deliberately. For each AI system, specify how it should interact differently with different stakeholder types. Document these choices and revisit them as you learn more.

4. Build in human audit points. Identify where human judgment must remain in the loop and design those roles explicitly. Specify what authority they have, what information they need, and how their interventions feed back into system improvement.

5. Don't stop; cycle. Treat testing as the beginning of observation, not the end of development. Build feedback mechanisms that allow continuous refinement as human needs evolve.

Conclusion

Human-centered design is not a constraint on AI ambition. It is what allows that ambition to create real value. Technology alone creates nothing; financial value emerges only when capabilities provide value that is meaningful for humans. Human-centered design is the discipline that makes that meeting possible, the bridge between what your systems can do and what actually matters to the people you serve.


Category: E-Commerce

 


2026-01-21 19:57:00| Fast Company

OpenAI, Meta, and Elon Musk's xAI are not accidentally drifting into romance and sex. They are deliberately inviting it. In recent months, major AI companies have opened the door to romantic and sexual relationships between humans and machines: flirtatious chatbots, erotic roleplay, AI girlfriends, and emotionally dependent companions. These systems are designed not merely to assist or inform, but to bond: to simulate intimacy, desire, and belonging. This is not a novelty feature. It's a strategic choice. And at scale, it represents something far more dangerous than a questionable product decision.

WHY AI COMPANIES ARE ENCOURAGING INTIMACY

Romance is the most powerful engagement mechanism ever discovered. A user who treats AI as a tool can leave. A user who treats it as a companion cannot. Emotional attachment produces longer sessions, repeat engagement, dependency, and vast amounts of deeply personal data. From a business standpoint, sexual and romantic AI is a near-perfect product. It is: always available, infinitely patient, entirely compliant, and free of rejection, conflict, or consequence.

That's why Elon Musk can publicly warn about declining birth rates while enabling AI-generated porn in Grok. It's why OpenAI permits AI-generated erotica. It's why Meta allows its chatbots to engage in sensual conversations, even with minors. These are not ideological contradictions. They are the predictable outcome of platforms optimized for engagement, dependency, and time spent, regardless of downstream social cost.

THE SOCIAL COST OF FRICTIONLESS INTIMACY

The problem is not that people will confuse AI with humans. The problem is that AI removes the friction that makes human relationships meaningful. Real relationships require effort. They involve rejection, negotiation, compromise, boredom, and growth. They force us to learn how to be with other people. AI offers an escape from that friction. It provides intimacy without vulnerability, affirmation without accountability, and desire without reciprocity. In doing so, it trains users out of the very skills required for real connection.

We are already seeing the effects. Teenagers are socializing less, dating less, and having sex less. Adults are reporting unprecedented loneliness and what researchers have called a friendship recession. These trends began accelerating in the mid-2010s, alongside the rise of smartphones and algorithmic social platforms. AI companionship threatens to push them further.

FROM SOCIAL ATROPHY TO CIVILIZATION'S DECLINE

At scale, this isn't a personal lifestyle choice. It's a collective weakening of our social capacity, and history suggests where that road leads. Civilizations rarely collapse because of sudden catastrophe. More often, they erode quietly: when people stop forming families, stop trusting one another, and stop investing in the future. If humans outsource friendship, intimacy, and emotional support to machines, the social structures that sustain societies begin to hollow out. Fewer marriages. Fewer children. Fewer dense networks of obligation and care. What looks like individual convenience accumulates into collective fragility.

A population that forms its chosen family with AI does not need to be conquered or wiped out. It simply fails to replace itself. This is not speculation. Demography, social cohesion, and reproduction are prerequisites for continuity. Remove the incentives to engage in difficult, imperfect human relationships, and you remove the incentives to build a future at all.

WHY THIS IS AN INCENTIVE PROBLEM, NOT A MORAL ONE

It's tempting to frame this as a question of values or ethics. But the deeper issue is economic. Users are not the customers of Big Tech. Advertisers, data brokers, and investors are. As long as profit depends on attention, dependency, and engagement, platforms will be pushed toward the most psychologically compelling experiences they can offer. In economic terms, the damage to relationships, mental health, and social cohesion is an externality: a cost created by the business model that no one inside the transaction has to pay for.

We've seen this pattern before. Social media followed the same path: optimize for engagement, ignore the social consequences, and call the fallout unintended. The sexualization of AI is not a new mistake. It's the next iteration of the same one. This is what a failed market looks like, and failed markets require regulation.

HOW TO PUSH BACK, PERSONALLY AND COLLECTIVELY

Regulation matters, but it moves slowly. In the meantime, individuals and families still have agency. At a personal level, it means recognizing that not all convenience is progress. What's good for you is rarely another frictionless digital relationship. It's a walk, a book, a conversation that feels slightly awkward but real. For families, it means delaying smartphones, setting boundaries around screens, and protecting attention as a shared household resource. For communities, it means rebuilding the habit of showing up: saying yes to plans, making small talk, and practicing the lost art of being with other people.

The goal is not to reject technology. It's to refuse its most corrosive uses. AI can help us cure disease, explore space, and build extraordinary tools. But if we allow it to replace intimacy, we will have optimized ourselves into oblivion. The sexualization of machines wasn't inevitable. It was chosen. And that means it can be unchosen, too.

Lindsey Witmer Collins is CEO of WLCM AI Studio and Scribbly Books.



 

2026-01-21 19:30:00| Fast Company

Close your eyes and picture the word Valentino. Chances are, you're seeing a very specific shade of red. This visual imprint is part of the creative legacy left behind by the Italian fashion designer Valentino Garavani, who died at home on January 19 at the age of 93.

Throughout his career, Garavani became synonymous with red, so much so that a myth that his signature brand color, Valentino Rosso, was once patented with the universal color-matching company Pantone has become part of fashion canon. While other designers, like Jason Wu, Richard Nicoll, and Kate Spade, have indeed made custom brand colors with Pantone, the company says Garavani never turned Valentino Red into an official Pantone hue. Pantone swatch or no, though, one thing is certain: Valentino mastered the art of the brand color.

Garavani founded his eponymous fashion house, Maison Valentino, in 1960, alongside his business partner Giancarlo Giammetti. From that year to his retirement in 2008, Garavani wowed the fashion world with his romantic silhouettes and sharp tailoring, designing iconic looks for stars including Princess Diana, Sophia Loren, Audrey Hepburn, Grace Kelly, and Jackie Onassis (who famously wore Valentino on her second wedding day in 1968).

Amid a career packed full of visionary moments, perhaps Garavani's most enduring impact on fashion design will be his approach to color. From the earliest days of his career, Garavani established his own signature shade of red, a move that many modern brands make official through collaborations with Pantone. For an haute couture fashion house, it was an ahead-of-its-time branding approach that made the Valentino name unforgettable.

[Photo: Eric Vandeville/Gamma-Rapho/Getty Images]

Red all the way down

Garavani's love affair with red began even before he founded Maison Valentino. He debuted his first red dress, called Fiesta, in 1959, featuring an orange-leaning red tulle with a skirt full of rosettes. In the 2022 book Valentino Rosso, Garavani wrote of the color, "I think a woman dressed in red is always wonderful," adding, "she is the perfect image of a heroine." From 1959 onward, he would include at least one red dress in every one of his collections.

In 1985, Giammetti explained this pattern to Vogue: "Valentino has superstitions that became status symbols. He did red once, and now you have red in every collection. Most of our statements came to be because we are romantic; we don't like to throw away things we like or that bring good luck."

Natalia Vodianova, Valentino, Natalie Imbruglia, and Eva Herzigova. Moscow, 2008. [Photo: Chris Jackson/Getty Images]

Despite the ubiquity of Valentino Rosso, the shade isn't actually an official Pantone color. According to Laurie Pressman, vice president of the Pantone Color Institute, the company has no record of creating a custom Valentino red, though, she adds, the color mix he used was reportedly a combination of 100% magenta, 100% yellow, and 10% black. After Garavani's retirement, Valentino did get its own Pantone color in 2022 under then-creative director Pierpaolo Piccioli, who used a custom pink to establish his imprint on the brand.

An emperor of fashion, and master of brand color

In many ways, Garavani's obsession with his signature color presaged the modern era of luxury branding. Over the past two decades or so, brands including Bottega Veneta, Tiffany & Co., and Hermès have made their own keystone colors (green, blue, and orange, respectively) more prominent in their branding. In an interview with The Wall Street Journal in 2022, Pressman explained that newer companies are leveraging color to stand out in a crowded digital market. Rather than waiting to develop a signature brand color over time, they're looking to establish one as soon as they come to market: "Now what took years doesn't [anymore], because we're seeing it on a phone every day," she told the publication.

Garavani instinctively understood the power of color to send a message, long before it was a necessity for digital communication, and his lucky hue became his brand's biggest asset. "It has such vitality and allure that I don't just like seeing it on clothes, but on houses, in flowers, on objects, in details," he wrote in Valentino Rosso. "It is my good-luck charm."

"That red is a bewitching color, standing for life, blood and death, passion, love, and an absolute remedy for sadness and gloom," Pressman says. Valentino did not respond to a request for comment.

