2026-02-19 15:14:22| Fast Company

AI is helping teams build software and tools faster than ever, but that doesn’t mean we’re building smarter. I’ve seen entire prototypes spin up in a day, thanks to AI coding assistants. But when you ask how they were built, or whether they’re secure, you get a lot of blank stares. That’s the gap emerging now: between what’s possible with AI and what’s actually ready to scale. What looks like progress can quickly become a liability, especially when no one is quite sure how the thing was built in the first place. Before you go all-in on AI-assisted coding, check these five fault lines:

1. You can’t govern what you can’t see. Perhaps the most overlooked risk of AI-assisted coding isn’t technical; it’s operational. In the rush to deploy AI tools, many companies have unintentionally created a layer of “shadow engineering.” Developers use these tools without official policies or visibility, leaving leaders in the dark about what’s being built and how. As Mark Curphey, cofounder of Crash Override, told me: “AI is accelerating everything. But without insight into what’s being built, by whom, or where it’s going, you’re scaling chaos with no controls.” That’s why visibility can’t be an afterthought; it’s what makes both governance and acceleration possible. Platforms like Crash Override are designed to surface how AI is being used across the engineering org, offering a real-time view into what’s being generated, where it’s going, and whether it’s introducing risk or value. And that visibility doesn’t exist in isolation. Tools like Jellyfish help connect development work to business goals, while Codacy monitors code quality. But none of these tools can do their job well if you don’t know what’s happening under the hood. Visibility isn’t about surveillance; it’s about building on a solid foundation.

2. Productivity is up. So is your risk exposure. A 2025 study by Apiiro, an application security firm, found that developers using GenAI tools are shipping 3 to 4 times more code. But they’re also generating 10 times more security risks. These weren’t just syntax errors. The increase included hidden access risks, insecure code patterns, exposed credentials, and deep architectural flaws: issues far more complex and costly to resolve over time.
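To make those flaw categories concrete, here is a minimal, hypothetical Python sketch (my illustration, not an example from the Apiiro study) of two patterns that frequently turn up in generated code: a credential hardcoded in source and a SQL query built by string interpolation, each paired with a safer equivalent.

    # Illustrative only: two of the flaw classes named above, plus safer variants.
    import os
    import sqlite3

    API_KEY = "sk-live-hypothetical-key"      # exposed credential: lives in the repo and in every clone
    API_KEY_SAFE = os.environ.get("API_KEY")  # safer: injected at runtime, never committed

    def find_user_unsafe(conn, name):
        # Insecure pattern: the query is assembled by string interpolation,
        # so a crafted name can rewrite the SQL (classic injection).
        return conn.execute(f"SELECT id, name FROM users WHERE name = '{name}'").fetchall()

    def find_user_safe(conn, name):
        # Parameterized query: the driver treats the value as data, not SQL.
        return conn.execute("SELECT id, name FROM users WHERE name = ?", (name,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        conn.execute("INSERT INTO users (name) VALUES ('alice')")
        payload = "' OR '1'='1"
        print(find_user_unsafe(conn, payload))  # returns every row: the filter was bypassed
        print(find_user_safe(conn, payload))    # returns []: the payload is matched literally

Both versions look plausible in a diff and pass a quick functional check, which is exactly why this class of issue tends to survive fast, AI-assisted iteration unless review or tooling specifically looks for it.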
3. AI-generated code is a potential legal risk. Because AI coding tools are trained on vast libraries of public code, they can generate snippets governed by restrictive open-source licenses. That raises important compliance questions, especially with licenses like GPL or AGPL, which could, in theory, require companies to open-source any software built on top of that output. But it’s worth clarifying: No company has been sued (yet) for using AI-generated code. The lawsuits we’ve seen (like the GitHub Copilot class action) have targeted the AI toolmakers, not the teams using their output. And the majority of the claims against GitHub were ultimately thrown out. Still, this is a fast-evolving area with real implications. AuditBoard’s 2025 study found that 82% of enterprise organizations were already deploying AI tools, but only 25% reported having any sort of official governance in place. That disconnect may not be a courtroom issue today, but it’s a visibility and audit issue that leaders can’t afford to ignore.

4. Speed is great, until only one person knows how it works. The “bus factor” has long described a worst-case scenario: What happens if the one person who knows how your software works suddenly disappears? “Powered by AI, an average developer becomes 100 times more productive. A superstar becomes 1,000 times,” Curphey noted. “Now imagine two of them are pushing all of that code into production. If they disappear, the company’s in serious trouble.” But the goal isn’t zero risk; it’s coverage. Just as test cases help ensure software is resilient, teams need to ensure knowledge and ownership are distributed. That includes understanding who’s building what, where the AI is involved, and how those systems will be maintained over time. Ironically, GenAI can help with this. It can surface patterns, identify gaps, and map ownership in ways traditional tooling can’t. More than just a productivity boost, it can be a tool for reducing fragility across your team and your codebase.

5. It’s easy to end up with “software slop.” Good, scalable AI-assisted code starts with the prompt. AI will generate exactly what you ask for. But if you don’t fully understand the technical constraints, or the risks you’re overlooking, it might give you code that looks good but has critical flaws in security or performance under the hood. You certainly don’t have to be a developer to use these tools well. But you do need to know what you don’t know, and how to account for it. As Curphey notes in a company blog post, “If you wouldn’t accept that level of vagueness from a junior engineer, why would you accept it from yourself when prompting?” Otherwise, you’re moving fast and creating a kind of digital brain rot: systems that degrade over time because no one really understands how they were built.

FROM VIBE CHECK TO REALITY CHECK

The takeaway: AI may accelerate output, but it also accelerates risk. Without rigorous review and governance, you may be shipping code that functions but isn’t structurally sound. So while AI is changing how software gets built, we need to be sure we’re building on a solid foundation. It’s no longer enough to move fast or ship often. As leaders, we need to understand how AI is being used inside our teams, and whether the things getting built are actually stable, scalable, and secure. Because if you don’t know what your team is using AI to build today, you may not like what you’re shipping tomorrow.

Lisa Larson-Kelley is founder and CEO of Quantious.



 

LATEST NEWS

2026-02-19 15:13:27| Fast Company

Indian Prime Minister Narendra Modi on Thursday pitched India as a central player in the global artificial intelligence ecosystem, saying the country aims to build technology at home while deploying it worldwide. “Design and develop in India. Deliver to the world. Deliver to humanity,” Modi told a gathering of world leaders, technology executives, and policymakers at the India AI Impact Summit in New Delhi.

Modi’s remarks came as India, one of the fastest-growing digital markets, seeks to leverage its experience in building large-scale digital public infrastructure and to present itself as a cost-effective hub for AI innovation.

The summit was also addressed by French President Emmanuel Macron, Google CEO Sundar Pichai, and U.N. Secretary-General António Guterres, who called for a $3 billion fund to help poorer countries build basic AI capacity, including skills, data access, and affordable computing power. “The future of AI cannot be decided by a handful of countries, or left to the whims of a few billionaires,” Guterres said, stressing that AI must “belong to everyone.”

India aims to ramp up its AI scale

India is using the summit to position itself as a bridge between advanced economies and the Global South. Indian officials cite the country’s digital ID and online payments systems as a model for deploying AI at low cost, particularly in developing countries. “We must democratize AI. It must become a tool for inclusion and empowerment, particularly for the Global South,” Modi said.

With nearly 1 billion internet users, India has become a key market for global technology companies expanding their AI businesses. Last December, Microsoft announced a $17.5 billion investment over four years to expand cloud and AI infrastructure in India. It followed Google’s $15 billion investment over five years, including plans for its first AI hub in the country. Amazon has also pledged $35 billion by 2030, targeting AI-driven digitization. India is also seeking up to $200 billion in data center investment in the coming years.

The country, however, lags in developing its own large-scale AI model like those of U.S.-based OpenAI or China’s DeepSeek, highlighting challenges such as limited access to advanced semiconductor chips and data centers, and the hundreds of local languages its models must learn from.

The summit has faced troubles

The summit opened Monday with organizational glitches, as attendees and exhibitors reported long lines and delays, and some complained on social media that personal belongings and display items had been stolen. Organizers later said the items were recovered.

Problems resurfaced Wednesday when a private Indian university was expelled from the summit after a staff member showcased a commercially available Chinese-made robotic dog while claiming it as the institution’s own innovation.

The setbacks continued Thursday when Microsoft co-founder Bill Gates withdrew from a scheduled keynote address. No reason was given, though the Gates Foundation said the move was intended “to ensure the focus remains on the AI Summit’s key priorities.” Gates is facing questions over his ties to late sex offender Jeffrey Epstein.

Associated Press



 

2026-02-19 15:00:00| Fast Company

The social media trial brought by a 20-year-old Californian plaintiff known as Kaley or KGM, putting Meta and YouTube in front of a jury, has captured the world’s attention. The bellwether trial is a test case for the liability of social media platforms and for how much they could be on the hook financially if found to have caused harm to their users. KGM, for her part, alleges that she faced anxiety, depression, and body image issues after using Instagram.

The proceedings could establish the first real legal boundaries for what has until now been largely unregulated algorithmic design, determining whether amplifying harmful content amounts to negligence. A verdict against Meta or YouTube in this bellwether case could open the door to other suits, and finally force disclosure of internal research that has so far remained confidential.

The first day that Mark Zuckerberg, Meta’s CEO, was on the stand, February 18, was a major moment: not necessarily for what Zuckerberg said, but for the fact that the case has gotten this far. “This is a significant moment in terms of these platforms finally being seen to be held to account by their own users,” says Steven Buckley, lecturer in digital media and sociology at City St George’s, University of London. “While Zuckerberg withstood rigorous questioning from Mark Lanier, the lawyer representing Kaley GM, the fact that he was there at all and the case got to trial is a significant happening.”

As Fast Company has previously reported, 2026 is the year that the world is getting tough on online safety, particularly for kids. And this trial is notable because it managed to sidestep the usual way social networks swerve liability: claiming Section 230 protections, which have been in place since the mid-1990s and insulate platforms from bearing responsibility for the actions of their users. If jurors agree that product design, rather than user behavior, is the root cause of harm, big tech’s decades-long legal shield could begin to fracture. That possibility alone has Silicon Valley watching nervously, with billions in potential damages on the line.

Prior to the trial beginning, Snap and TikTok settled with the claimant without admission of liability, leaving YouTube and Meta to fight the case. A Meta spokesperson tells Fast Company that the company “strongly disagree[s] with these allegations and [is] confident the evidence will show our longstanding commitment to supporting young people,” adding that the evidence will show she “faced many significant, difficult challenges well before she ever used social media.” YouTube spokesperson José Castaneda tells Fast Company: “The allegations in these complaints are simply not true.”

“It’s not particularly surprising that these large platforms are finally facing some legal repercussions from their actual users,” says Buckley. A steady drumbeat of reporting, alongside other smaller legal cases, has revealed information suggesting that social media can be harmful to younger users. This case is therefore a potential watershed because the plaintiffs argue that Instagram’s and YouTube’s underlying product design (features like the infinite scroll, autoplay, and recommendation algorithms that serve up progressively more engaging content) constitutes a defective product. But most of those other cases haven’t received as much attention because they’ve not gotten as far as this one has, nor have they seemed as likely to succeed.
Zuckerberg did not come across as someone with children’s best interests at heart, says Tama Leaver, professor of internet studies at Curtin University in Australia. Leaver contrasts Zuckerberg’s performance in court with that of Adam Mosseri a few days earlier, who the researcher says had the tenacity to argue that the term “addiction” is being misused. In contrast, Zuckerberg “didn’t feel like someone who’d done their homework, but rather someone who was surprised they had to turn up and answer these questions,” Leaver explains. “If his job was to convince the listening world that he could be a trusted figure in the lives of teens and young people, then he failed.”

Despite that poor performance by Zuckerberg, and despite the strength of the case in comparison to others that have gone before, some think that a decision against the social media firms, or a general movement to recognize the issues inherent in social media, could backfire. “One concern I have is that people will think that the simple solution to many of the issues raised in these lawsuits is to simply ban under-16s from using the platforms,” says Buckley. “This is a woefully misguided reaction. The scientific evidence regarding the link between social media use at a young age and addiction is still not well established.”

Whether the jury agrees with that assessment or not, the trial has already achieved something that years of congressional hearings and regulatory hand-wringing haven’t: putting the people who designed these systems under oath, making them answer difficult questions, and then holding them responsible for the consequences of what they say. “One of the reasons I think we have gotten to this stage is that some people have come to the conclusion that their governments are not going to do anything meaningful to hold these companies to account, and so have felt compelled to take them on themselves,” says Buckley. The rest of the tech industry will be watching closely to see what comes next.



 
