|
|||||
Every morning, people fasten their watch, slip on a bracelet, and head out the door without thinking much about what they might encounter along the way. The air they breathe, the dust on their hands, and the surfaces they touch all feel ordinary. Yet many chemical exposures happen quietly, without smell, taste, or warning. What if something as simple as a silicone band around your wrist could help track those invisible exposures? Environmental monitoring has traditionally relied on snapshots of exposure from a water sample collected on a single day, a blood sample drawn at one point in time, or soil tested from a specific location. But exposure unfolds gradually as people move through different environments and come into contact with air, dust, and surfaces throughout the day. New noninvasive monitoring tools aim to capture that longer-term picture. As synthetic chemicals such as forever chemicals, known as perfluoroalkyl and polyfluoroalkyl substances (PFAS), become more widespread in everyday environments, scientists are increasingly focused on understanding how exposure to these substances occurs in daily life. PFAS are called forever chemicals because they take a very long time to degrade in the environment. Traditional monitoring misses everyday reality Traditional monitoring methods are essential for identifying contamination, but they capture exposure as a moment rather than something that unfolds over time. In studies involving people, measuring exposure often requires invasive procedures such as blood draws, which can be expensive, logistically challenging, and, for some participants, uncomfortable enough to discourage involvement. Early in my environmental chemistry research, I noticed something that didnt quite add up. People living in the same agricultural community, or animals sharing the same landscape, often showed very different chemical profiles even when environmental measurements looked similar. The surroundings hadnt changed much; daily behavior had. Movement through different spaces, time spent indoors or outdoors, contact with treated surfaces, and interactions with consumer products all shape exposure in ways a single sample cant fully capture. That realization raised a larger question: If exposure unfolds gradually, how can scientists measure it using tools designed for specific moments? Answering that question requires a shift away from isolated measurements and toward approaches that reflect lived experience. What noninvasive tools change That question led me to work with passive, noninvasive monitoring tools, including silicone wristbands. Rather than actively collecting samples, these tools absorb chemicals from the surrounding environment over time, similar to how skin or fur interacts with air, dust, and surfaces. Silicone wristbands work because they are made of a silicone polymer called polydimethylsiloxane, or PDMS, that can absorb many organic chemicals from the surrounding environment. As the band is worn, compounds from air, dust, and surfaces gradually diffuse into the silicone over time. The material acts somewhat like a sponge, passively collecting traces of chemicals the wearer encounters during daily activities. After the wristband is worn for several days or weeks, researchers can extract those compounds in the laboratory and analyze them to better understand patterns of exposure. Silicone wristbands are one example of a broader group of passive, noninvasive monitoring tools designed to observe how chemicals accumulate over time. Other approaches, including passive air samplers placed in homes or small wearable devices, follow similar principles by absorbing compounds from the surrounding environment. Researchers have used noninvasive tools in community studies to track exposure without medical procedures, lowering barriers to participation and reducing the burden on volunteers. For example, scientists have applied these approaches to study exposure among adolescent girls in agricultural communities, firefighters, and occupants in office buildings. Researchers have also adapted similar ideas for animal and wildlife studies. Instead of drawing blood, scientists may use wearable tags, collars, or passive samplers placed in an animals environment, such as nesting areas or habitats, to understand how chemicals accumulate over time. These approaches can offer insight into exposure across different ecosystems while minimizing stress on animals. Like any method, passive monitoring has limitations. Some chemicals are more difficult to capture than others, and environmental conditions such as temperature, sunlight, or airflow can influence how efficiently samplers absorb pollutants. Wearable devices also reflect exposure over a specific period, meaning they cannot provide a complete lifetime record. These approaches do not replace traditional monitoring. Instead, they add context, showing how exposure accumulates across time and space rather than appearing suddenly at a single sampling point. Why this matters now In the United States, PFAS contamination has become a growing public concern, from drinking water advisories to product restrictions and cleanup efforts. Federal agencies, including the Environmental Protection Agency, have highlighted the persistence of these chemicals and their widespread presence in the environment. Much of the public conversation focuses on where PFAS are found in water systems, soils, or consumer products. Understanding exposure, however, also requires attention to ow people and ecosystems encounter these chemicals in everyday settings. Noninvasive monitoring tools may help fill that gap. They offer ways to better understand cumulative exposure, identify overlooked pathways, and inform environmental health and conservation decisions. For wildlife, these methods may allow researchers to detect emerging risks earlier without adding pressure to species already facing habitat loss and climate stress. Although these approaches are becoming more common in environmental health research, they are still emerging compared with traditional sampling methods. Costs, the need for standardized protocols, and differences in how various chemicals interact with passive materials can slow wider adoption. As researchers continue refining these tools, they can complement rather than replace established monitoring strategies. Yaw Edu Essandoh is a PhD student in public and environmental affairs at Indiana University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
Category:
E-Commerce
Earlier this week, social media was wowed by images from the streets of Chinese cities showing senior citizens lining up to have OpenClaw, the always-on AI assistant, installed on their laptops, desktops, and other devices. Areas like Shenzhen and Wuxi offered subsidies to try to scale up adoption of the tool and capitalize on its capabilities. An enormous proportion of all OpenClaw instances installed worldwide, as tracked by public dashboards, emanate from China. China is adopting tech at an absolute breakneck pace. A ridiculous amount of people turned up into a public event in Shenzhen today to install the OpenClaw.Some devs who work at Chinese big tech companies threw a free public event right outside the Tencent Building in pic.twitter.com/2t4y2ancyz— Rohan Paul (@rohanpaul_ai) March 8, 2026 But just as quickly as China adopted OpenClaw, it now appears to be shunning it. The countrys internet emergency response center has issued an official warning about the risks the technology poses. The central government has sent out diktats to government agencies and state-owned enterprises, warning them against installing OpenClaw on their systems. The private sector has also responded. The same pop-up providers of installation services are now offering to uninstall unwanted OpenClaw instances for a fee. Its almost a notice from the Department of Stating the Bleeding Obvious, says Alan Woodward, a cybersecurity professor at the University of Surrey in England. Everyone has been saying ‘dont be so silly as to give agentic AI access to any valuable data. Yet Woodward points out that Chinas response is more than thatthey appear to recognize that AI adoption has been so rapid that it presents a prime target for supply chain attacks. Attackers were bound to produce malicious add-ons and plug-ins, he says. China cant seem to make up its mind about what to make of OpenClaw, says Ryan Fedasiuk, a fellow at the American Enterprise Institute covering China and its tech development. Beijing is simultaneously banning OpenClaw on government networks while local governments in Shenzhen and Wuxi are subsidizing companies that build on top of it, he says. That points to a dual focus, Fedasiuk reckons. The Chinese government aims to capture the economic upside of agentic AI while keeping it out of the party-state’s own bloodstream, Fedasiuk says. However, how long that balance can hold is debatable, not least because of the way every private-sector actor is trying to adopt agentic AI, he adds. Banning agents in 2026 is like trying to ban spreadsheets in 1985, or Google Sheets in 2013, he says. The productivity gains are enormous, and the opportunity cost of abstaining from the use of agents will eventually become untenable. Still, Fedasiuk points out that Chinas OpenClaw ban seems eminently sensible. Governments should be alarmed by the cybersecurity implications of AI agents, he says. Social norms around the technology are progressing such that many hackers will soon no longer need to crack the encryption that guards valuable files or digital services, but merely gaslight a piece of software that has already been given access to them. The problem is that its out of step with current thinking about AI. Nevertheless, it appears that China has decided that widespread use of OpenClaw could cause safety headaches in the months to come. Prompt injections and plug-in poisoning are still the thorn in a chatbots side, and it isnt surprising China is flagging it, when you consider that every layer of the AI stack has a commercial incentive to push the tools far and wide, says Jake Moore, a cybersecurity expert at ESET. There are also the same structural risks with agentic AI tools that are granted high-level system permissions before anyone has properly stress-tested what an attacker can do with them. Moore says the on-and-off relationship with OpenClaw reflects how different the pace of development is between the bleeding edge of artificial intelligence and those trying to roll it out responsibly. AI is clearly built to be fast and invasive, but it is outpacing security standards and reviews, he explains. For Fedasiuk, that dysfunction between the speed of development and the speed of security patching is evident in how Chinas Central Cyberspace Affairs Commission announced its change in policy. [It] has watched agents proliferate across government networks and moved to restrict their use within days or weeks, he says. Usually the commission would study the issue as a policy problem, issue a white paper or road map, and then come to a conclusion on which it acted. The fact that it didnt suggests preexisting anxiety within the CCP [Chinese Communist Party] about what autonomous AI means for information securityand possibly a more sophisticated understanding of where the technology is headed than many Western observers give them credit for, Fedasiuk says.
Category:
E-Commerce
I think the strongest indicator of how normal using AI has become is the language we use as shorthand for it. It’s now extremely common for someone to say they asked “chat” for some piece of information. We all know what they mean. But if you needed data on how popular AI portals are now, OpenAI provided it recently when the company revealed that ChatGPT has 900 million users, up from 800 million in the fall. Even if Gemini, Copilot, and Claude weren’t also rising (they are), that would be enough for the medianot to mention brands and marketing/PR agenciesto really understand how fast AI is growing as a discovery channel. Whether or not it’s a source of traffic doesn’t matter; it’s a meaningful layer between publishers and audiences. That’s obviously the reason there’s been so much interest in the infant field of GEO (generative engine optimization) lately, and why I’ve written about it more than once in the past few months. But the focus on how to get AI search engines to notice and reference content doesn’t mean there shouldn’t be some kind of reckoning with how the content got there in the first place, and whatif anyvalue exchange that should trigger. {"blockType":"mv-promo-block","data":{"imageDesktopUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/media-copilot.png","imageMobileUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/fe289316-bc4f-44ef-96bf-148b3d8578c1_1440x1440.png","eyebrow":"","headline":"\u003Cstrong\u003ESubscribe to The Media Copilot\u003C\/strong\u003E","dek":"Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for The Media Copilot. To learn more visit \u003Ca href=\u0022https:\/\/mediacopilot.substack.com\/\u0022\u003Emediacopilot.substack.com\u003C\/a\u003E","subhed":"","description":"","ctaText":"SIGN UP","ctaUrl":"https:\/\/mediacopilot.substack.com\/","theme":{"bg":"#f5f5f5","text":"#000000","eyebrow":"#9aa2aa","subhed":"#ffffff","buttonBg":"#000000","buttonHoverBg":"#3b3f46","buttonText":"#ffffff"},"imageDesktopId":91453847,"imageMobileId":91453848,"shareable":false,"slug":""}} Surveys, such as this one done by OnMessage last fall, consistently show the public believes content providers should be compensated when their content is scraped by AI engines. The AI industry tends to have a different view, often suggesting that “publicly available” data (i.e., stuff on the internet) is fair game. It’s more nuanced than that, of course, but the central issue is one of leverage: The AI companies have it, and publishers by and large don’t. The push for a better bargain A new industry coalition is looking to rebalance those scales. In late February, a group of U.K. media companiesincluding the BBC, the Financial Times, and The Guardianannounced they were forming SPUR, which stands for Standards for Publisher Usage Rights. In an open letter, the leaders of those companies articulated the group’s purpose: “to establish shared technical standards and responsible licensing frameworks that ensure AI developers can access high quality, reliable journalism in legitimate, responsible and convenient ways.” In other words, SPUR is meant to help lead the publishing industry toward a better bargain between AI companies and the media. Currently, publishers have a hodgepodge of solutions: You could pursue a licensing deal with one of the big AI companies, an option available only to publishers above a certain size. You could sue the AI companies, an expensive proposition. Or you could try to defend your content through a combination of paywalls, bot-blocking protocols, and nascent technologies aimed at getting AI crawlers to pay for access. The spirit of SPUR is that there’s power in numbers. Although it’s beginning with a handful of U.K. publishers, the group is actively working to recruit media worldwide into the coalition. By taking collective action, which the news media is traditionally allergic to, the coalition stands a better chance of establishing some kind of framework for how AI services will pay for access to content. It stands an even better chance with allies. Last year, Cloudflare stepped into this fight, advocating on the side of publishers. And it brought to the battlefield technical clout: A significant portion of internet traffic goes through Cloudflare’s network, so it has an outsize say in what the rules are, and which ones get enforced. As part of its push against unauthorized AI scraping, it introduced Pay Per Crawl, a new way to charge bots for access to content. Couldflare’s solution is actually one of several on the market, and although SPUR doesn’t intend to play favorites, Pay Per Crawl is exactly the kind of technical barrier the group was created to encourage. The fact is, unauthorized AI crawling is rampant. TollBit, which publishes quarterly reports about bot activity, recently highlighted the problem of third parties leveraging virtual, “headless” browsers (essentially bots accessing sites as if they were humans and then scraping them) on an industrial scale to crawl vast amounts of datathe equivalent of a fishing trawler. For the longest time, the only technical weapon digital publishers had was the robots exclusion protocol (robots.txt), but it’s an honor system that can easily be ignored or bypassed. The main focus of SPUR, sources tell me, is to help publishers build more defenses. By making it more difficult and cost-prohibitive for AI crawlers to access content, it will encourage the people who operate them to make deals. Then come the agents The biggest wild card here is agents. AI services access content largely for three purposes: for training data, for search crawling, and in response to user requests. It’s the last category that is proving very contentious and the impetus behind a war of words between Perplexity and Cloudflare last summer. User agents have traditionally been given a pass from blocking since they effectively act as human proxies, not mass-scraping tools. Importantly, though, they don’t behav as humans (for example, they don’t look at ads), so many sites (and especially publishers) believe they should be entitled to block them. Some believe this aspect of AI crawling should be regulated, and certainly it’s part of the ongoing lawsuits between the media and the AI industry. But those approaches drag on; SPUR is acting now. You can picture this quickly leading to an arms race, and when the players were individual publishers versus the AI industry, that’s very asymmetric warfare. But a large, worldwide industry coalition, backed by technical allies like Cloudflare, might actually have a chance to push back. So now the hard work begins of herding the cats of the media industry. And the clock is ticking: User behavior is shifting rapidly, and asking “chat” what’s happening in the world means more agents are replacing human traffic to news websites. SPUR may give publishers a chance to shape that system, but it is taking form with or without them. Once those rules harden, changing them will be much harder. {"blockType":"mv-promo-block","data":{"imageDesktopUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/media-copilot.png","imageMobileUrl":"https:\/\/images.fastcompany.com\/image\/upload\/f_webp,q_auto,c_fit\/wp-cms-2\/2025\/03\/fe289316-bc4f-44ef-96bf-148b3d8578c1_1440x1440.png","eyebrow":"","headline":"\u003Cstrong\u003ESubscribe to The Media Copilot\u003C\/strong\u003E","dek":"Want more about how AI is changing media? Never miss an update from Pete Pachal by signing up for The Media Copilot. To learn more visit \u003Ca href=\u0022https:\/\/mediacopilot.substack.com\/\u0022\u003Emediacopilot.substack.com\u003C\/a\u003E","subhed":"","description":"","ctaText":"SIGN UP","ctaUrl":"https:\/\/mediacopilot.substack.com\/","theme":{"bg":"#f5f5f5","text":"#000000","eyebrow":"#9aa2aa","subhed":"#ffffff","buttonBg":"#000000","buttonHoverBg":"#3b3f46","buttonText":"#ffffff"},"imageDesktopId":91453847,"imageMobileId":91453848,"shareable":false,"slug":""}}
Category:
E-Commerce
All news |
||||||||||||||||||
|
||||||||||||||||||