
2026-02-19 15:14:22| Fast Company

AI is helping teams build software and tools faster than ever, but that doesn't mean we're building smarter. I've seen entire prototypes spin up in a day, thanks to AI coding assistants. But when you ask how they were built, or whether they're secure, you get a lot of blank stares. That's the gap emerging now, between what's possible with AI and what's actually ready to scale. What looks like progress can quickly become a liability, especially when no one's quite sure how the thing was built in the first place. Before you go all-in on AI-assisted coding, check these five fault lines:

1. You can't govern what you can't see

Perhaps the most overlooked risk of AI-assisted coding isn't technical, it's operational. In the rush to deploy AI tools, many companies have unintentionally created a layer of "shadow engineering." Developers use these tools without official policies or visibility, leaving leaders in the dark about what's being built and how. As Mark Curphey, cofounder of Crash Override, told me: "AI is accelerating everything. But without insight into what's being built, by whom, or where it's going, you're scaling chaos with no controls." That's why visibility can't be an afterthought; it's what makes both governance and acceleration possible. Platforms like Crash Override are designed to surface how AI is being used across the engineering org, offering a real-time view into what's being generated, where it's going, and whether it's introducing risk or value. And that visibility doesn't exist in isolation. Tools like Jellyfish help connect development work to business goals, while Codacy monitors code quality. But none of these tools can do their job well if you don't know what's happening under the hood. Visibility isn't about surveillance; it's about building on a solid foundation.

2. Productivity is up. So is your risk exposure.

A 2025 study by Apiiro, an application security firm, found that AI-assisted developers are shipping 3 to 4 times more code with GenAI tools. But they're also generating 10 times more security risks. These weren't just syntax errors. The increase included hidden access risks, insecure code patterns, exposed credentials, and deep architectural flaws, issues far more complex and costly to resolve over time.

3. AI-generated code is a potential legal risk

Because AI coding tools are trained on vast libraries of public code, they can generate snippets governed by restrictive open-source licenses. That raises important compliance questions, especially with licenses like GPL or AGPL, which could, in theory, require companies to open-source any software built on top of that output. But it's worth clarifying: No company has been sued (yet) for using AI-generated code. The lawsuits we've seen (like the GitHub Copilot class action) have targeted the AI toolmakers, not the teams using their output. And the majority of the claims against GitHub were ultimately thrown out. Still, this is a fast-evolving area with real implications. AuditBoard's 2025 study found that 82% of enterprise organizations were already deploying AI tools, but only 25% report having any sort of official governance in place. That disconnect may not be a courtroom issue today, but it's a visibility and audit issue that leaders can't afford to ignore.

4. Speed is great, until only one person knows how it works

The "bus factor" has long described a worst-case scenario: What happens if the one person who knows how your software works suddenly disappears? "Powered by AI, an average developer becomes 100 times more productive. A superstar becomes 1,000 times," Curphey noted. "Now imagine two of them are pushing all of that code into production. If they disappear, the company's in serious trouble." But the goal isn't zero risk, it's coverage. Just like test cases help ensure software is resilient, teams need to ensure knowledge and ownership are distributed. That includes understanding who's building what, where the AI is involved, and how those systems will be maintained over time. Ironically, GenAI can help with this. It can surface patterns, identify gaps, and map ownership in ways traditional tooling can't. More than just a productivity boost, it can be a tool for reducing fragility across your team and your codebase.

5. It's easy to end up with "software slop"

Good, scalable AI-assisted code starts with the prompt. AI will generate exactly what you ask for. But if you don't fully understand the technical constraints, or the risks you're overlooking, it might give you code that looks good but has critical flaws in security or performance under the hood. You certainly don't have to be a developer to use these tools well. But you do need to know what you don't know, and how to account for it. As Curphey notes in a company blog post: "If you wouldn't accept that level of vagueness from a junior engineer, why would you accept it from yourself when prompting?" Otherwise, you're moving fast and creating a kind of digital brain rot: systems that degrade over time because no one really understands how they were built.

FROM VIBE CHECK TO REALITY CHECK

The takeaway: AI may accelerate output, but it also accelerates risk. Without rigorous review and governance, you may be shipping code that functions but isn't structurally sound. So while AI is changing how software gets built, we need to be sure we're building on a solid foundation. It's no longer enough to move fast or ship often. As leaders, we need to understand how AI is being used inside our teams, and whether the things getting built are actually stable, scalable, and secure. Because if you don't know what your team is using AI to build today, you may not like what you're shipping tomorrow.

Lisa Larson-Kelley is founder and CEO of Quantious.
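The "exposed credentials" pattern flagged in the Apiiro findings above is easy to make concrete. Below is a minimal, hypothetical Python sketch (the key value and environment-variable name are invented for illustration): the hardcoded constant shows the insecure pattern AI assistants often emit, and the function shows the environment-variable alternative a reviewer would ask for instead.

```python
import os

# Insecure pattern frequently seen in AI-generated snippets: a credential
# hardcoded in source, where it ends up committed to version control.
HARDCODED_KEY = "sk-live-1234567890abcdef"  # exposed credential: do not do this

def get_api_key() -> str:
    """Safer alternative: read the credential from the environment at runtime,
    failing loudly if it has not been configured."""
    key = os.environ.get("EXAMPLE_SERVICE_API_KEY")
    if not key:
        raise RuntimeError("EXAMPLE_SERVICE_API_KEY is not set")
    return key
```

The point isn't that the fix is hard; it's that the insecure version compiles, runs, and looks done, which is exactly why it slips through without review.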



2026-02-19 15:13:27| Fast Company

Indian Prime Minister Narendra Modi on Thursday pitched India as a central player in the global artificial intelligence ecosystem, saying the country aims to build technology at home while deploying it worldwide.

"Design and develop in India. Deliver to the world. Deliver to humanity," Modi told a gathering of world leaders, technology executives, and policymakers at the India AI Impact Summit in New Delhi.

Modi's remarks came as India, one of the fastest-growing digital markets, seeks to leverage its experience in building large-scale digital public infrastructure and to present itself as a cost-effective hub for AI innovation.

The summit was also addressed by French President Emmanuel Macron, Google CEO Sundar Pichai, and U.N. Secretary-General António Guterres, who called for a $3 billion fund to help poorer countries build basic AI capacity, including skills, data access, and affordable computing power. "The future of AI cannot be decided by a handful of countries, or left to the whims of a few billionaires," Guterres said, stressing that AI must "belong to everyone."

India aims to ramp up its AI scale

India is using the summit to position itself as a bridge between advanced economies and the Global South. Indian officials cite the country's digital ID and online payments systems as a model for deploying AI at low cost, particularly in developing countries. "We must democratize AI. It must become a tool for inclusion and empowerment, particularly for the Global South," Modi said.

With nearly 1 billion internet users, India has become a key market for global technology companies expanding their AI businesses. Last December, Microsoft announced a $17.5 billion investment over four years to expand cloud and AI infrastructure in India. It followed Google's $15 billion investment over five years, including plans for its first AI hub in the country. Amazon has also pledged $35 billion by 2030, targeting AI-driven digitization. India is also seeking up to $200 billion in data center investment in the coming years.

The country, however, lags in developing its own large-scale AI model like those of U.S.-based OpenAI or China's DeepSeek, facing challenges such as limited access to advanced semiconductor chips and data centers, and the hundreds of local languages its models must learn from.

The summit has faced troubles

The summit opened Monday with organizational glitches, as attendees and exhibitors reported long lines and delays, and some complained on social media that personal belongings and display items had been stolen. Organizers later said the items were recovered.

Problems resurfaced Wednesday when a private Indian university was expelled from the summit after a staff member showcased a commercially available Chinese-made robotic dog while claiming it as the institution's own innovation.

The setbacks continued Thursday when Microsoft co-founder Bill Gates withdrew from a scheduled keynote address. No reason was given, though the Gates Foundation said the move was intended "to ensure the focus remains on the AI Summit's key priorities." Gates is facing questions over his ties to late sex offender Jeffrey Epstein.

Associated Press



2026-02-19 15:00:00| Fast Company

The social media trial brought by a 20-year-old Californian plaintiff known as Kaley, or KGM, putting Meta and YouTube in front of a jury, has captured the world's attention. The bellwether trial is a test case for the liability of social media platforms and how much they could be on the hook financially if found to have caused harm to their users. KGM, for her part, alleges that she faced anxiety, depression, and body image issues after using Instagram.

The proceedings could establish the first real legal boundaries for what has been, up to now, largely unregulated algorithmic design, determining whether amplifying harmful content amounts to negligence. A verdict against Meta or YouTube in this bellwether case could open the door to other suits, and finally force disclosure of internal research that has so far remained confidential.

The first day that Mark Zuckerberg, Meta's CEO, was on the stand, on February 18, was a major moment, not necessarily for what Zuckerberg said, but for the fact the case has gotten this far. "This is a significant moment in terms of these platforms finally being seen to be held to account by their own users," says Steven Buckley, lecturer in media and digital sociology at City St George's, University of London. While Zuckerberg withstood rigorous questioning from Mark Lanier, the lawyer representing KGM, the fact that he was there at all and the case got to trial is a significant happening.

As Fast Company has previously reported, 2026 is the year that the world is getting tough on online safety, particularly for kids. And this trial is notable because it managed to sidestep the usual way social networks swerve liability: claiming Section 230 protections, which have been in place since the mid-1990s and insulate platforms from bearing responsibility for the actions of their users. If jurors agree that product design, rather than user behavior, is the root cause of harm, big tech's decades-long legal shield could begin to fracture. That possibility alone has Silicon Valley watching nervously, with billions in potential damages on the line.

Prior to the trial beginning, Snap and TikTok settled with the claimant without admission of liability, leaving YouTube and Meta to fight the trial. A Meta spokesperson tells Fast Company the firm "strongly disagree[s] with these allegations" and is "confident the evidence will show our longstanding commitment to supporting young people," adding that the evidence will show she faced many significant, difficult challenges well before she ever used social media. YouTube spokesperson José Castaneda tells Fast Company: "The allegations in these complaints are simply not true."

"It's not particularly surprising that these large platforms are finally facing some legal repercussions from their actual users," says Buckley. A steady drumbeat of reporting, alongside other smaller legal cases, has revealed information that suggests social media can be harmful to younger users. This case is therefore a potential watershed because the plaintiffs argue that Instagram's and YouTube's underlying product design, features like the infinite scroll, autoplay, and recommendation algorithms that serve up progressively more engaging content, constitutes a defective product. But most of those other cases haven't received as much attention because they've not gotten as far as this one has, nor have they been as likely to succeed.

"Zuckerberg did not come across as someone with children's best interests at heart," says Tama Leaver, professor of internet studies at Curtin University in Australia. Leaver contrasts Zuckerberg's performance in court with Adam Mosseri's a few days earlier, who the researcher says had the tenacity to argue that the term addiction is being misused. In contrast, Zuckerberg "didn't feel like someone who'd done their homework, but rather someone who was surprised they had to turn up and answer these questions," Leaver explains. "If his job was to convince the listening world that he could be a trusted figure in the lives of teens and young people, then he failed."

Despite that poor performance by Zuckerberg, and despite the strength of the case in comparison to others that have gone before, some think that a decision against the social media firms, or a general movement to recognize the issues inherent with social media, could backfire. "One concern I have is that people will think that the simple solution to many of the issues raised in these lawsuits is to simply ban under-16s from using the platforms," says Buckley. "This is a woefully misguided reaction." The scientific evidence regarding the link between social media use at a young age and addiction is still not well established.

Whether the jury agrees with that assessment or not, the trial has already achieved something that years of congressional hearings and regulatory hand-wringing haven't: putting the people who designed these systems under oath and making them answer difficult questions, then be responsible for the consequences of what they say. "One of the reasons I think we have gotten to this stage is that some people have come to the conclusion that their governments are not going to do anything meaningful to hold these companies to account and so have felt compelled to take them on themselves," says Buckley. The rest of the tech industry will be watching closely to see what comes next.



2026-02-19 14:37:32| Fast Company

Tariffs paid by midsized U.S. businesses tripled over the course of last year, new research tied to one of America's leading banks showed on Thursday, more evidence that President Donald Trump's push to charge higher taxes on imports is causing economic disruption.

The additional taxes have meant that companies that employ a combined 48 million people in the U.S., the kinds of businesses that Trump had promised to revive, have had to find ways to absorb the new expense: by passing it along to customers in the form of higher prices, employing fewer workers, or accepting lower profits.

"That's a big change in their cost of doing business," said Chi Mac, business research director of the JPMorganChase Institute, which published the analysis on Thursday. "We also see some indications that they may be shifting away from transacting with China and maybe toward some other regions in Asia."

The research doesn't say how the additional costs are flowing through the economy, but it indicates that tariffs are being paid by U.S. firms. It's part of a growing body of economic analyses that counter the administration's claims that foreigners pay the tariffs.

The JPMorganChase Institute report used payments data to look at businesses that might lack the pricing power of large multinational companies to offset tariffs, but may be small enough to quickly change supply chains to minimize exposure to the tax increases. The companies tended to have revenues between $10 million and $1 billion with fewer than 500 employees, a category known as "middle market."

The analysis suggests that the Trump administration's goal of becoming less directly reliant on Chinese manufacturers has been occurring. Payments to China by these companies were 20% below their October 2024 levels, but it's unclear whether that means China is simply routing its goods through other countries or if supply chains have moved. The authors of the analysis emphasized in an interview that companies are still adjusting to the tariffs and said they plan to continue studying the issue.

The Trump administration has been adamant that the tariffs are a boon for the economy, businesses, and workers. Kevin Hassett, director of the White House National Economic Council, lashed out on Wednesday at research by the New York Federal Reserve showing that nearly 90% of the burden for Trump's tariffs fell on U.S. companies and consumers. "The paper is an embarrassment," Hassett told CNBC. "It's, I think, the worst paper I've ever seen in the history of the Federal Reserve system. The people associated with this paper should presumably be disciplined."

Trump increased the average tariff rate to 13% from 2.6% last year, according to the New York Fed researchers. He declared that tariffs on some items like steel, kitchen cabinets, and bathroom vanities were in the national security interest of the country, and declared an economic emergency to bypass Congress and impose a baseline tax on goods from much of the world last April at an event he called "Liberation Day." The high rates provoked a financial market panic, prompting Trump to walk back his rates and then engage in talks with multiple countries that led to a set of new trade frameworks. The Supreme Court is expected to rule soon on whether Trump surpassed his legal authority by declaring an economic emergency.

Trump was elected in 2024 on his promise to tame inflation, but his tariffs have contributed to voter frustration over affordability. While inflation has not spiked during Trump's term thus far, hiring has slowed sharply, and a team of academic economists estimates that consumer prices were roughly 0.8 percentage points higher than they would otherwise be.
Josh Boak, Associated Press



2026-02-19 14:00:00| Fast Company

Can AI help neurodivergent adults connect with each other? That's the bet of a new social network called Synchrony, which believes AI and a well-designed social network with the right safeguards can reduce social atomization and calm the overwhelming cacophony of socializing online. Launching February 19, the social network debuts during a moment when social media, chatbots, and doomscrolling have made digital communications a hot-button topic for parents.

No other app for the neurodiverse is focusing primarily on reducing social anxiety and encouraging friendship, says cofounder Jamie Pastrano. "I think that's the biggest piece of it, and no other app is focusing on building an authentic community."

Synchrony also has support from the Starry Foundation and Autism Speaks, two large U.S. advocacy groups, and approval from the Apple App Store. "I was really blown away about what they're trying to do," says Bobby Vossoughi, president of the Starry Foundation. "These kids are isolated and their social cues are off. They're creating something that could really change this community's lives for the long term."

A parenting challenge without a solution

The idea for Synchrony came from Pastrano, a former management consultant and executive sales leader, whose son, Jesse, 21, is autistic. As Jesse went through his teenage years, Pastrano became frustrated with the challenges she saw her son facing around the friendship gap; she saw him as a social kid, but planning, timing, even saying the appropriate thing often tripped him up. Unlike other challenges she'd faced as a mother of a neurodivergent child, this one didn't seem to have a solution.

Research shows that people with autism or neurodevelopmental differences, roughly 1 in 5 people according to the Neurodiversity Alliance, face increasing loneliness as they transition between adolescence and adulthood. New social responsibilities and expectations for life after school, combined with the loss of support systems that may have been embedded in secondary and university education, can lead to isolation. One of the cofounders, Brittany Moser, an autism specialist who teaches at Park University in Missouri, says that she's held crying students who, forced to operate in a world that's not built for them, are desperate for social connection. She hopes this network can foster it.

"Autism doesn't end at 18," Pastrano says. "There was this huge gap in services to support social, emotional, and community needs." Pastrano sold her company in 2024 and devoted herself to solving the issue with what would become Synchrony.

Part of Pastrano's inspiration came from reality television. The dating show Love on the Spectrum piqued her interest, causing her to think not about romance, but about connection, friendship, and community. She even contacted a coach on the show, who suggested she get certified at the PEERS program at UCLA, which teaches social and dating skills to young adults on the spectrum.

Broadly speaking, Synchrony is built with the same digital infrastructure as a dating site, but is meant for fostering friendships amid a unique population. A big part of the design challenge was making sure it was suitable for the audience, and wasn't too distracting or loud. Profiles focus much more on interests, Pastrano says, since interests weigh much more heavily as a reason to communicate among this population. There's also a space to list neurodiversity classifications and communication style and preferences ("I prefer text to phone calls," or "I take a few days to reply," etc.) as part of the effort to front-load key details. Simplified menus and colors and no ads help reduce distractions. Pastrano also wants to respect the community and focus on healthy experiences rather than push for rapid growth; users pay a monthly fee of $44.99 after a free 30-day trial, allowing the network to avoid advertisements. Part of the registration process includes two-step verification: both the user and a trusted person (a teacher, doctor, or parent) need to input personal details and a photo ID, to make sure bad actors outside the community aren't given access.

Social Coach, or 'Seductive Cul-de-sac'

Part of Synchrony's strategy is the use of Jesse (named after Pastrano's son), marketed as an AI-powered social support tool that goes far beyond chat-assist technology. By providing real-time conversation support, the chatbot aims to help users overcome social anxiety and a lack of confidence around socialization. Talking with Jesse online, developers claim, will bolster users' self-assurance and communication skills, eventually manifesting in real life.

When Synchrony users get stuck in an online conversation, they can tap an icon to summon Jesse, who will provide editable suggestions to advance or end an interaction. The AI coach offers three main options: a tool to help express yourself, which offers ways to continue the conversation; a button that helps parse the conversation to better understand what happened, and whether something might have been meant as flirty or friendly; and a final option to protect, offering suggestions to set boundaries and exit a conversation quietly. Built on a large language model via Amazon Bedrock and trained by Synchrony staff, Jesse scans conversations constantly so it can provide social coaching when asked.

The use of AI among the neurodivergent population has sparked the same debates as the technology's use among the population at large. Research by a team at Stanford found that an AI chatbot they developed called Noora, designed to improve communication skills, can improve empathy among users with autism. Some members of the community have claimed AI coaches have helped them with relationships and transformed their lives. At the same time, some advocacy groups have warned that chatbots' emotional manipulation can be more severe for the neurodiverse, and some researchers are concerned AI might reinforce bad communication habits. British researcher Chris Papadopoulos sums up the state of play in a recent paper, concluding that while the technology holds the potential to democratize companionship, left unchecked, AI companions could become "a seductive cul-de-sac, capturing autistic people in artificial relationships that stunt their growth or even lead them into harm's way."

Amid awareness of the sometimes destructive and even deadly consequences of chatbot use, there are significant guardrails built into Jesse, says Moser, including a long list of activities and actions to avoid, like not sharing personal addresses. Jesse is also told not to dispense medical advice; Jesse is not a therapist, and as the founders are clear to note, this isn't a clinical app. If users start asking Jesse about off-topic concepts, Moser says it will be programmed to reply something to the effect of, "Hmm, I don't know if that's really going to help you connect with the other members." There will also be warnings if someone is spending too much time just talking with Jesse. Synchrony is launching with human moderation to provide extra safeguards.

Lynn Koegel, a professor and researcher at Stanford University who has studied autism and technology, says her team has spent time updating and changing their models of Noora, to make sure it's not too harsh, such as not failing to reinforce communication attempts or being too strict about grammar issues. She says it's very important to do more in-depth studies and clinical research to make sure these tools work well and as intended (she has not seen or tested Synchrony). "My gut feeling is these tools can be very good support," she says. "The jury is out about whether individual programs that haven't been tested can be assistive."

As the Synchrony team works out bugs and final design issues before launch, the challenge becomes building a robust enough community to drive more organic growth. Early user testing that started in December, both an alpha test of 14 users and closed beta tests among university support groups for autistic students, helped them refine the model and layout. The marketing strategy at launch doesn't focus on the users themselves, but rather on neurodiverse employer groups, universities that have neurodiverse programs (which can create their own closed-loop, campus versions of the app), advocates, and relevant podcast hosts.

"Success is about awareness and attention," says Pastrano. "It's not a numbers game for me. It's a really personal game."


