
2025-07-08 23:00:00| Fast Company

Today, most AI is being built on blind faith inside of black boxes. It requires users to have an unquestioning belief in something neither transparent nor understandable. The industry is moving at warp speed, employing deep learning to tackle every problem, training on datasets that few people can trace, and hoping no one gets sued. The most popular AI models are developed behind closed doors, with unclear documentation, vague licensing, and limited visibility into the provenance of training data. It's a mess, we all know it, and it's only going to get messier if we don't take a different approach.

This "train now, apologize later" mindset is unsustainable. It undermines trust, heightens legal risk, and slows meaningful innovation. We don't need more hype. We need systems where ethical design is foundational. The only way we will get there is by adopting the true spirit of open source and making the underlying code, model parameters, and training data available for anyone to use, study, modify, and distribute. Increasing transparency in AI model development will foster innovation and lay a stronger foundation for civic discourse around AI policy and ethics.

Open source transparency empowers users

Bias is a technical inevitability in the architecture of current large language models (LLMs). To some extent, the entire process of training is nothing but computing the billions of micro-biases that align with the contents of the training dataset. If we want to align AI with human values, instead of fixating on the red herring of bias, we must have transparency around training. The source datasets, fine-tuning prompts and responses, and evaluation metrics will reveal precisely the values and assumptions of the engineers who create the AI model.

Consider a high school English teacher using an AI tool to summarize Shakespeare for literary discussion guides.
If the AI developer sanitizes the Bard for modern sensibilities, filtering out language they personally deem inappropriate or controversial, they're not just tweaking output; they're rewriting history.

It is impossible to make an AI system tailored for every single user. Attempting to do so led to the recent backlash against ChatGPT for being too sycophantic. Values cannot be unilaterally determined at a low technical level, and certainly not by just a few AI engineers. Instead, AI developers should provide transparency into their systems so that users, communities, and governments can make informed decisions about how best to align the AI with societal values.

Open source will foster AI innovation

Research firm Forrester has stated that open source can help firms accelerate AI initiatives, reduce costs, and increase architectural openness, ultimately leading to a more dynamic, inclusive tech ecosystem.

AI models consist of more than just software code. In fact, most models' code is very similar. What uniquely differentiates them are the input datasets and the training regimen. Thus, an intellectually honest application of the concept of "open source" to AI requires disclosure of the training regimen as well as the model source code.

The open-source software movement has always been about more than just its tech ingredients. It's about how people come together to form distributed communities of innovation and collective stewardship. The Python programming language, a foundation for modern AI, is a great example. Python evolved from a simple scripting language into a rich ecosystem that forms the backbone of modern data processing and AI. It did this through countless contributions from researchers, developers, and innovators, not corporate mandates. Open source gives everyone permission to innovate, without installing any single company as gatekeeper.
This same spirit of open innovation continues today, with tools like Lumen AI, which democratizes advanced AI capabilities, allowing teams to transform data through natural language without requiring deep technical expertise. The AI systems we're building are too consequential to stay hidden behind closed doors and too complex to govern without collaboration.

However, we will need more than open code if we want AI to be trustworthy. We need open dialogue among the enterprises, maintainers, and communities these tools serve, because transparency without ongoing conversation risks becoming mere performance. Real trust emerges when those building the technology actively engage with those deploying it and those whose lives it affects, creating feedback loops that ensure AI systems remain aligned with evolving human values and societal needs.

Open source AI is inevitable and necessary for trust

Previous technology revolutions like personal computers and the Internet started with a few proprietary vendors but ultimately succeeded based on open protocols and massively democratized innovation. This benefited both users and for-profit corporations, although the latter often fought to keep things proprietary for as long as possible. Corporations even tried to give away closed technologies "for free," under the mistaken impression that cost is the primary driver of open source adoption.

A similar dynamic is happening today. There are many free AI models available, but users are left to wrestle with questions of ethics and alignment around these black-boxed, opaque models. For societies to trust AI technology, transparency is not optional. These powerful systems are too consequential to stay hidden behind closed doors, and the innovation space around them will ultimately prove too complex to be governed by a few centralized actors. If proprietary companies insist on opacity, then it falls upon the open source community to create the alternative.
AI technology can and will follow the same commoditization trajectory as previous technologies. Despite all the hyperbolic press about artificial general intelligence, there is a simple, profound truth about LLMs: the algorithm that turns a digitized corpus into a thought-machine is straightforward and freely available. Anyone can do this, given compute time. There are very few secrets in AI today. Open communities of innovation can be built around the foundational elements of modern AI: the source code, the computing infrastructure, and, most importantly, the data. It falls upon us, as practitioners, to insist on open approaches to AI, and not to be distracted by merely "free" facsimiles.

Peter Wang is chief AI and innovation officer at Anaconda.


Category: E-Commerce

 

LATEST NEWS

2025-07-08 22:30:00| Fast Company

Creativity's value to business success can't be overstated. Not only do 70% of employers say that creative thinking is the most in-demand skill, but studies show that companies prioritizing design outperform those that don't by two to one. And as the rise of AI, social media, and creators continues to quickly transform both business and culture, it will likely be the creative industry, and those working within it, that will help others navigate that change. Things are evolving quickly, and creativity is essential to that evolution.

So then, why is it that creative education, the backbone of creativity, is largely standing still while others are embracing change? For decades, the creative education landscape in the U.S. has been largely private, expensive, and increasingly out of sync with the industry's real needs. Most accredited creative programs follow a similar structure: multi-year degrees with high tuition costs, studio-based courses, and portfolio development as the primary measure of progress. While these programs can offer technical training and creative rigor, they often produce similar outcomes: predictable ideas in an industry that thrives on surprise. Also, creative tools and thinking are changing every day, necessitating constant learning not facilitated by current models.

Of course, creativity can thrive outside of formal education. Especially now, creative tools are increasingly accessible, and that's a good thing. Platforms like TikTok, Canva, and AI-driven products have lowered the barrier to entry, and today's creators are proving that you don't need a degree to have a voice, or an audience. But access to creative tools isn't the same as understanding how to use them well, or how to achieve a level of craft in execution that not only produces results but is worthy of being celebrated. Structured education still matters.
So, instead of abandoning creative education altogether, the answer may be in forcing it to evolve, embracing new models that acknowledge the real-world needs of business and culture.

When education fails, everybody loses

In my work with creative organization and educational nonprofit D&AD, I've seen the impact of the lack of innovation in U.S. creative education, especially as expensive tuitions and employers' reliance on traditional talent pipelines lead to creative homogeneity:

Business growth suffers when companies pull from the same narrow talent pools. Diverse perspectives drive cultural relevance and resonance. And for Gen Z in particular, a generation demanding cultural alignment from the brands they support, the cost of getting it wrong is higher than ever.

Diversity of thought suffers when teams are filled with people who've had the same training, same references, and same industry touchpoints. Surprising ideas don't arise from predictable inputs. Across marketing, branding, and beyond, we're seeing the effects: ideas that feel increasingly familiar, created by teams that look increasingly alike.

Entry-level, mid-level, and even leadership demographics stagnate, because if the pipeline into the creative industry is closed, the pipeline up stays closed too. Studies show that this kind of lack of leadership diversity hurts business innovation as well.

The current traditional U.S. creative educational model only perpetuates these issues, excluding not only the same groups often left out of higher education (low-income students, first-generation college students, Black, brown, Indigenous, neurodivergent, and rural creatives) but also a growing cohort of social media creators, as well as creatives with raw talent who never had access to training, mentorship, or even the vocabulary to describe what they're good at.

Strengthen creative education

Fortunately, there's been a major shift in how alternative creative programs are viewed, not just by talent, but by the industry itself.
What once felt like a plan B is now seen as a fast, relevant, and often more inclusive way to surface new voices and ideas. Alternative programs don't need to be a threat to traditional creative education. In fact, organizations like ours can provide insights into how to evolve to provide what talent needs.

Commit to low cost or even free: More urgently, access remains a major barrier. If you can't afford tuition, unpaid internships, or the time it takes to build a portfolio, you're often locked out, not because you lack talent, but because you lack the means to invest, or even awareness that this path exists. That's why, at D&AD, our night school Shift is fully funded with no cost to the student, while still delivering a 74% industry placement rate. Other educational institutions need to follow suit, offering ways to dramatically decrease the financial barrier to entry. With the right access, the right talent will show up.

Stress real-world skills: Most creative programs do a good job teaching skills, but rarely offer the context students need to thrive in the real world. Students learn how to ideate, design, and critique. But they are often not educated on the other aspects essential to success: understanding pace, context, and nuance; mastering the soft skills essential to conversation and collaboration; and learning how to function productively as part of a team. We've found that by taking on live briefs from brands like Spotify, Adidas, Diageo, and Airbnb (presenting ideas, fielding feedback, and navigating ambiguity in real time) we've been able to nurture creatives who hit the ground running in a workplace.

Nurture a learning mindset: The only constant is change, so it's critical to embrace an approach that prioritizes discovery and experimentation. The simple fact is that you can't expect relevant creative work from teams running on outdated approaches. Iterative training isn't just about tools.
It's about staying connected to cultural shifts, industry changes, evolving platforms, and most importantly, changing audience expectations. The most impactful creative work comes from teams that are learning continuously, not just about craft, but about context. More traditional creative colleges and schools need to build iterative offerings that reflect this reality.

The truth is that the best creative education doesn't just teach craft; it nurtures curiosity, builds confidence, provides context, and fosters community. These aren't immutable qualities but ones that evolve and change, especially now that social and technological factors have radically altered the creative industry and the businesses relying on it. We have to invest in creative people, not just the creativity. And it starts by giving creatives the right education.

Kwame Taylor-Hayford is the cofounder of Kin and president of D&AD.


Category: E-Commerce

 

2025-07-08 22:30:00| Fast Company

In a prescient tweet, OpenAI CEO Sam Altman noted that AI will become persuasive long before it becomes intelligent. A scintillating study conducted by researchers at the University of Zurich just proved him right.

In the study, researchers used AI to challenge Redditors' perspectives in the site's /changemyview subreddit, where users share an opinion on a topic and challenge others to present counterarguments in a civilized manner. Unbeknownst to users, researchers used AI to produce arguments on everything from dangerous dog breeds to the housing crisis. The AI-generated comments proved extremely effective at changing Redditors' minds.

The university's ethics committee frowned upon the study, as it's generally unethical to subject people to experimentation without their knowledge. Reddit's legal team seems to be pursuing legal action against the university. Unfortunately, the Zurich researchers decided not to publish their full findings, but what we do know about the study points to glaring dangers in the online ecosystem: manipulation, misinformation, and a degradation of human connection.

The power of persuasion

The internet has become a weapon of mass deception. In the AI era, this persuasion power becomes even more drastic. AI avatars resembling financial advisors, therapists, girlfriends, and spiritual mentors can become a channel for ideological manipulation. The University of Zurich study underscores this risk. If manipulation is unacceptable when researchers do it, why is it okay for tech giants to do it?

Large language models (LLMs) are the latest products of algorithmically driven content. Algorithmically curated social media and streaming platforms have already proven manipulative. Facebook experimented with manipulating users' moods, without their consent, through their news feeds as early as 2012. The Rabbit Hole podcast shows how YouTube's algorithm created a pipeline for radicalizing young men.
Cambridge Analytica and Russiagate showed how social media influences elections at home and abroad. TikTok's algorithm has been shown to create harmful echo chambers that produce division.

Foundational LLMs like Claude and ChatGPT are like a big internet hive mind. The premise of these models holds that they know more than you. Their inhumanness makes users assume their outputs are unbiased. Algorithmic creation of content is even more dangerous than algorithmic curation of content via the feed. This content speaks directly to you, coddles you, champions and reinforces your viewpoint. Look no further than Grok, the LLM produced by Elon Musk's company xAI. From the beginning, Musk was blatant about engineering Grok to support his worldview. Earlier this year, Grok fell under scrutiny for doubting the number of Jews killed in the Holocaust and for promoting the falsehood of white genocide in South Africa.

Human vs. machine

Reddit users felt hostile toward the study because the AI responses were presented as human responses. It's an intrusion. The subreddit's rules protect and incentivize real human discussion, dictating that the view in question must be yours and that AI-generated posts must be disclosed.

Reddit is a microcosm of what the internet used to be: a constellation of niche interests and communities largely governing themselves, encouraging exploration. Through this digital meandering, a whole generation found like-minded cohorts and evolved with the help of those relationships.

Since the early 2010s, bots have taken over the internet. On social media, they are deployed en masse to manipulate public perception. For example, a group of bots in 2016 posed as Black Trump supporters, ostensibly to normalize Trumpism for minority voters. Bots played a pivotal role in Brexit, for another. I believe it matters deeply that online interaction remains human and genuine.
If covert, AI-powered content is unethical in research, its proliferation within social media platforms should send up a red flag, too.

The thirst for authenticity

The third ethical offense of the Zurich study: it's inauthentic. The researchers using AI to advocate a viewpoint did not hold that viewpoint themselves. Why does this matter? Because the point of the internet is not to argue with robots all day. If bots are arguing with bots over the merits of DEI, if students are using AI to write and teachers are using AI to grade, then, seriously, what are we doing?

I worry about the near-term consequences of outsourcing our thinking to LLMs. For now, the experience of most working adults lies in a pre-AI world, allowing us to employ AI judiciously (mostly, for now). But what happens when the workforce is full of adults who have never known anything but AI and who never had an unassisted thought? LLMs can't rival the human mind in creativity, problem-solving, feeling, and ingenuity. LLMs are an echo of us. What do we become if we lose our original voice to cacophony? The Zurich study treads on this holy human space. That's what makes it so distasteful, and, by extension, so impactful.

The bottom line

The reasons this study is scandalous are the same reasons it's worthwhile. It highlights what's already wrong with a bot-infested internet, and how much more wrong it could get with AI. Its trespasses bring the degradation of the online ecosystem into stark relief. This degradation has been happening for over a decade, yet incrementally, so that we haven't felt it. A predatory, manipulative internet is a foregone conclusion. It's the water we're swimming in, folks. This study shows how murky the water's become, and how much worse it might get. I hope it will fuel meaningful legislation, or at least a thoughtful, broad-based personal opting out. In the absence of rules against AI bots, Big Tech is happy to cash in on their largesse.
Lindsey Witmer Collins is CEO of WLCM App Studio and Scribbly Books.


Category: E-Commerce

 
