Transportation Secretary Sean Duffy this week launched a new advisory council that could reshape American transportation in line with President Donald Trump’s aesthetic preferences. The U.S. Department of Transportation’s newly created Beautifying Transportation Infrastructure Council held its inaugural meeting on February 2 and quickly outlined plans to make a highly influential mark on the look and design of U.S. transportation infrastructure.

The council could affect an array of initiatives, including interstate highways, bridges, transit hubs, and airports, and has been established to provide recommendations on the policies, designs, and funding priorities of the DOT. Though the council was created to serve an advisory role with no decision-making or funding authority, it currently has two major agenda items that could form the basis of a widespread makeover of American transportation infrastructure. The first is oversight of a national conceptual design competition seeking innovative thinking around transportation infrastructure design. The second is the creation of a design guidebook that would set new aesthetic recommendations for the design and renovation of federally controlled transportation projects. Its tentative title: “Beauty and Transportation.”

On the surface, these efforts seem open to a variety of design approaches. However, the October announcement of the council states that the advisory effort will “align” directly with the aesthetic preferences laid out in Trump’s August 2025 executive order “Making Federal Architecture Beautiful Again.” That order defines the traditional and classical architecture of ancient Athens and Rome as the basis of a preferred architectural style for federal buildings, and that preference is likely to influence whatever comes out of the Beautifying Transportation Infrastructure Council.
Its chair is Justin Shubow, president of the National Civic Art Society, the Washington, D.C., nonprofit that champions classical architecture and helped write Trump’s executive order making traditional architecture the preferred style for federal buildings. “That order called for new federal buildings to be beautiful, uplifting, and admired by the common person. It reoriented architecture away from modernism toward the classical and traditional design that is so appreciated and often preferred by ordinary people,” Shubow said during his opening remarks at the council meeting. “This council, I believe, should not recommend that any particular style be mandated, but it should make clear that classical and traditional design are legitimate options.”

Council guidelines

The council has set additional guidelines to govern its work. Shubow noted that the Transportation Department has drafted five preliminary principles to help shape the council’s advice and the creation of its design guidebook. These include ideals that “transportation infrastructure should be designed to uplift and inspire the human spirit and lend prestige to the nation,” and that it “should foster a sense of place and inspire national and community pride in a way that builds upon the past.”

The council’s members include architects, landscape architects, state transportation officials, engineers, and construction specialists. None were overtly dogmatic about design preferences, at least during this initial meeting. Shubow has cited projects like San Francisco’s Golden Gate Bridge and Cincinnati’s Union Terminal as exemplars of the kinds of designs the council might encourage. But council members also spent time discussing a wider range of aesthetic approaches to transportation design, including artistic lighting under bridges and the use of regionally appropriate wildflowers along highways.
One member, Bryan Jones, mid-Atlantic division president of the engineering and construction firm HNTB, pointed to one of his firm’s recent projects, the swooping Sixth Street Viaduct in Los Angeles, a decidedly modern structure.

Official and unofficial timelines

Timelines for the design competition and guidebook have not been set. The council will have its next public meeting in the summer and will meet in private subcommittees in the meantime. As Trump engages in a range of rebuilding and construction efforts in Washington, D.C., the work of the council may already be starting, if unofficially. Duffy was on hand to kick off the council’s inaugural session, but had to leave early to go to the White House. He had another meeting with Trump to discuss the potential redesign of Dulles International Airport, “a beautiful project that he wants to look at,” to “revamp in a great way,” Duffy said.
AI isn’t eliminating human work. It’s redistributing human judgment, away from routine tasks and into the narrow zones where ambiguity is high, mistakes are costly, and trust actually matters. This shift helps explain a growing disconnect in the AI conversation. On one hand, models are improving at breathtaking speed. On the other, many ambitious AI deployments stall, scale more slowly than expected, or quietly revert to hybrid workflows. The issue isn’t capability. It’s trust.

The trust gap most AI strategies overlook

AI adoption doesn’t hinge on whether a system can do a task. It hinges on whether humans are willing to rely on its output without checking it. That gap between performance and reliance, the trust gap, is what ultimately determines where AI replaces work, where it augments it, and where humans remain indispensable. Two factors shape that gap more than anything else: ambiguity and stakes. Ambiguity refers to how much interpretation, context, or judgment a task requires. Stakes refer to what happens if the system gets it wrong: financially, legally, reputationally, or ethically. When ambiguity is low and stakes are low, automation thrives. When both are high, humans must stay firmly in the loop. Most real-world work lives somewhere in between, and that’s where the future of labor is being renegotiated.

A simple way to see where AI fits

Think of work along two axes: how ambiguous it is, and how costly errors are. Low-ambiguity, low-stakes tasks (basic classification, simple tagging, routine routing) are rapidly becoming fully automated. This is where AI quietly replaces human labor, often without much controversy. Low-ambiguity but high-stakes tasks, such as compliance checks or identity verification, are typically automated but closely monitored. Humans verify, audit, and intervene when something looks off.
High-ambiguity, low-stakes work (creative labeling, sentiment analysis, exploratory research) tends to use AI as an assistant, with light human oversight. But the most important quadrant is high ambiguity and high stakes. These are the tasks where trust is hardest to earn: fraud edge cases, safety-critical moderation, medical or financial interpretation, and the data decisions that shape how AI models behave in the real world. Here, humans aren’t disappearing. They’re becoming more targeted, more specialized, and more on demand.

When the human edge actually disappears

Interactive voice response (IVR) systems illustrate the rule. The stakes were not low: IVR is literally the company’s voice to its customers. But ambiguity was. Once synthetic voices became good enough, quality was easy to judge, variance was low, and the trust gap collapsed. That alone was sufficient for AI to take over.

When trust keeps humans in the loop

Translation followed a different trajectory. Translation is inherently ambiguous, as there are multiple ways to translate a sentence. As a result, machine translation rapidly absorbed casual, low-risk content such as TikTok videos. However, in high-stakes contexts, such as legal contracts, medical instructions, financial reporting, and global brand messaging, trust is never fully transferred to the machine. For these tasks, professional translators are still required to augment the AI’s initial output. Since AI now performs the bulk of the work, full-time translators have become rare. Instead, they increasingly operate within expert networks, deployed just in time to fine-tune and verify the process, thereby closing the trust gap. The same shift is now playing out in how data is prepared and validated for AI systems themselves. Early AI training relied on massive, full-time human labeling operations. Today, models increasingly handle routine evaluation.
Human expertise is reserved for the most sensitive decisions, the ones that shape how AI behaves under pressure.

What this means for the future of work

The popular narrative frames AI as a replacement technology: machines versus humans. The reality inside organizations looks very different. AI is becoming the default for scale. Humans are becoming the exception handlers, the source of judgment when context is unclear, consequences are severe, or trust is on the line. This doesn’t mean fewer humans overall. It means different human roles: less repetitive labor, more judgment deployed just in time. More experts working across many systems, fewer people locked into single, narrowly defined tasks. The organizations that succeed with AI won’t be the ones that automate the most. They’ll be the ones that understand where not to automate, and that design workflows capable of pulling human judgment in at exactly the right moment, at exactly the right level. The future of work isn’t humans versus machines. It’s AI at scale, plus human judgment delivered through expert networks, not permanent roles. Translation and model validation show the pattern; white-collar work is next. And that, quietly, is what companies are discovering now.
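The two-axis framing described above, ambiguity crossed with stakes, can be sketched as a toy routing heuristic. This is purely illustrative: the `route_task` function, the threshold value, and the mode names are assumptions for the sake of the sketch, not anything proposed in the article.

```python
from enum import Enum

class Mode(Enum):
    FULL_AUTOMATION = "automate fully"
    AUTOMATE_WITH_AUDIT = "automate, but monitor and audit"
    AI_ASSIST = "AI drafts, light human oversight"
    HUMAN_IN_THE_LOOP = "expert judgment, AI as support"

def route_task(ambiguity: float, stakes: float, threshold: float = 0.5) -> Mode:
    """Map a task's ambiguity and error cost (both scored 0..1) to an oversight mode."""
    high_ambiguity = ambiguity >= threshold
    high_stakes = stakes >= threshold
    if not high_ambiguity and not high_stakes:
        return Mode.FULL_AUTOMATION      # e.g. routine tagging and routing
    if not high_ambiguity and high_stakes:
        return Mode.AUTOMATE_WITH_AUDIT  # e.g. compliance checks, ID verification
    if high_ambiguity and not high_stakes:
        return Mode.AI_ASSIST            # e.g. exploratory research, sentiment
    return Mode.HUMAN_IN_THE_LOOP        # e.g. fraud edge cases, safety moderation

# A legal-contract translation: highly ambiguous, costly to get wrong.
print(route_task(ambiguity=0.9, stakes=0.8).value)
```

The point of the sketch is the asymmetry the article describes: only one of the four quadrants pulls scarce human judgment in, which is why that judgment increasingly arrives just in time rather than as a permanent role.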
AI can do incredible things. So far, though, most of those things have been virtual. If you want a killer article for your bichon frise blog or an expertly crafted letter disputing a parking ticket you probably deserve, chatbots like ChatGPT and Gemini can deliver that. All those things are locked into the nebulous world of information, though. They’re helpful, but the products of today’s large language models (LLMs) and neural networks aren’t actually doing much of anything. AI’s silicon-bound status, however, is beginning to change. The tech is increasingly invading the real world. 2026 is the year that AI gets physical. And that shift has huge implications for the future of the technology, and for the impact when it fails.

Call Me a Robot

The change started with cars. The idea of a self-driving car goes back to the 1950s. But the technology always felt like it was decades away. Now it’s here. Robotaxi companies like Waymo and Zoox give more than 450,000 rides per week to paying customers. I ride in Waymo vehicles all the time, and I love calling a robot from an app and having it drive me across town. Self-driving cars finally arrived because of a whole slew of things, including cheap lidar scanners and better batteries. But the rise of deep learning and AI played the most pivotal role. The AI models that power Waymo vehicles are much better at driving than humans. And they can learn and improve on the fly: here in San Francisco where I live, Waymos have gotten more assertive as they’ve learned the roads better. Self-driving AI is getting so good that it’s increasingly able to navigate roads without the fancy (and expensive) sensors you see atop first-generation Waymos. Tesla uses simple cameras and is getting closer to true self-driving.

Fold My Laundry, Siri

Self-driving cars are an incredible application of physical AI. But they’re hardly the only one. Driving is a great initial test case for the tech, because it has fairly clear rules and limits.
Cars need to stay on the road, recognize red lights, and minimize cat fatalities. Other physical tasks are harder to automate with AI. But they have potentially even bigger upsides. Companies are increasingly pairing artificial intelligence with humanoid robots, teaching the robots’ artificial brains about the physical world so they can navigate it capably. The ultimate dream is to put these robots to work. They could perform a wide variety of jobs in factories or warehouses, for example. Generally speaking, current industrial robots need to be specifically built for a single task, but an AI-powered one could learn multiple tasks: assembling a product and then placing it on a shelf, for example. But AI-powered robots could also fill gaping holes in the human labor market. Caretaking for the elderly is incredibly important as the world gets older on average. Yet finding enough people for caretaking roles is nearly impossible. Especially in countries like Japan, robots are beginning to fill the gaps. Dexterous, AI-powered robots may soon work well enough for tasks like doing dishes, folding laundry, or even cooking to be automated. These robot companions could help elderly people live more independently. With advanced LLMs, they could even form relationships with their real-world charges, helping with loneliness or reminding a person with memory challenges to take their meds on schedule.

The Parable of the Raunchy Bear

Of course, all of this comes with risks. When an LLM hallucinates in a virtual space, it’s annoying but rarely damaging. If your ChatGPT-generated recipe for meatballs sucks, you probably won’t die. And if the chatbot writing your blog post confuses a bichon for a poodle, your dog will be very angry with you, but otherwise the consequences are minor. Physical AI is different. Clearly, if Waymo’s technology goes awry, it could accidentally steer a 5,000-pound object into a building or a bystander.
And you’ve read enough science fiction that I don’t need to remind you about robot uprisings. Many of these risks are well understood, though, and thus well controlled. Power outages aside, Waymos rarely run into serious challenges on the road, and industrial robots rarely injure people. The bigger risks start to creep in when AI is applied haphazardly to the physical world without much oversight or planning. As physical AI expands and LLMs get cheaper, this will happen more often. Take the case of an AI teddy bear with a built-in LLM. It was supposed to chat with kids, and perhaps read them bedtime stories. Instead, it started instructing them on BDSM and other raunchy topics, as well as how to pop pills and where to find knives. The bear was quickly pulled from the market. But the lesson is clear: Unlike traditional computer code, LLMs are nondeterministic; you can’t predict their outputs from the inputs you feed them. In 2026 and beyond, this will mean cars that avoid accidents better than human drivers, robots that can easily learn work they’ve never done before, and AI embedded in physical systems (like power and utility grids) that can instantly respond to damage or outages. But it will also mean lots of failures, and perhaps a few catastrophic ones. LLMs’ unpredictability is their power. But as AI gets physical, that unpredictability will also lead to a faster, less tractable, more chaotic world.