
2025-02-12 10:00:00| Fast Company

YouTube isn't just a website anymore. And computers and smartphones aren't even the primary ways that people watch YouTube content, either. YouTube CEO Neal Mohan, in his annual letter to the YouTube community yesterday, wrote, "TV has surpassed mobile and is now the primary device for YouTube viewing in the U.S." At the same time, he said, creators have moved from filming grainy videos of themselves on desktop computers to building studios and producing popular talk shows and feature-length films.

Ahead of posting his letter, Mohan spoke with Fast Company about how YouTube, which is celebrating its 20th year, is responding to changing viewer habits and giving creators on the platform the tools they need to be the new Hollywood. This interview has been edited for length and clarity.

What do you think is innovative about YouTube's transition to the living room?

When people turn on the TV, they're turning on YouTube, especially young people. So television really is YouTube these days. That's an overnight success that's been many years in the making. We've been investing heavily in the viewer and consumer experience of YouTube on living room screens.

Then, for creators: Creators really are the new Hollywood. They're the new forms of entertainment. They're building creator-led studios. In some ways they're the new startup economy in Hollywood. They're hiring people, they're providing jobs to lots of people, and they really are about this new form of entertainment.

Has YouTube fundamentally changed as a result?

YouTube really is kind of its own thing. We're not a social media platform. People don't go there to connect with their friends. They come to watch their favorite type of content, whether it's a podcast, a creator, traditional media, live sports. We're also not a traditional broadcaster. We're something that's in its own lane. You come to YouTube if you want to watch a 15-second YouTube Short, a 15-minute long-form video from your favorite creator, or a 15-hour livestream.

One of the big stories this year has been about the growth of podcasters on YouTube, particularly video podcasters. And interestingly, one of the places where we all consume those podcasts is on television screens. There's over a billion hours of YouTube consumed on television screens globally every single day. The living room is our fastest-growing screen. Of the top creators on YouTube, the number who get the majority of their watch time from the living room has grown 400% year on year. And the number of creators who earn the most revenue through their living room consumption has grown 30% year on year.

What have you heard from creators about what they need to succeed on traditional TV sets, and how have you responded?

We've invested heavily in bringing the interactivity that we all, as viewers, love about YouTube to the living room screen, whether through more prominent abilities to subscribe, so that creators can grow their subscription counts, [or] through linking, so links that creators have in their videos are more seamless through QR codes on screens. It's also about having a second-screen experience by linking your phone to the television set. You may have noticed that when you go to a creator's channel page on YouTube, you get this cinematic experience of their content. [YouTube allows] creators to organize their videos in terms of episodes and seasons.

And then finally, AI plays a big role in terms of empowering creators and human creativity. For example, allowing creators to auto-dub their videos in multiple languages seamlessly and automatically when they upload a video, or helping creators solve the blank-screen problem, working with Gemini integrated directly into YouTube Studio so that you can cowrite with Gemini and produce a script for your video.

Is it a challenge to move to the living room screen, or is that viewing experience just the same thing, only bigger and further from the keyboard?

The living room screen was our fastest-growing screen before the pandemic, back all the way to 2019, through the pandemic, and obviously since then. So it's been a very big area of investment for us. We have worked closely with our device manufacturer partners, the people who make connected TVs, to create a world-class living room experience. What is exciting from my perspective is that I think we're just getting started. The opportunity before us, especially around the world, is enormous.

What comes next?

Our job is to build the world's best stage. All of our technological investment, all of our product innovation, is about building that stage. One of the fastest-growing sleeper-success products on the living room is actually YouTube Shorts, which you think of as a mobile-only type of product. But lots and lots of Shorts viewership happens in the living room. And [we're going to continue] to double down on our investments in AI to really empower human creativity. So those are two areas that you should expect to see more from us in 2025.

You're still pretty new to this role. How would you describe your personal touch at YouTube? What's the Neal imprint?

I've been at YouTube for a very long time, almost a decade. And my relationship with YouTube actually goes back to before either I or YouTube were part of Google, so almost 17, 18 years. As I've been in the CEO role now for the last couple of years, I would say my focus is to continue to do a lot of those things that I've done: first, really focusing on technology and product innovation. I'm a technologist at heart, but I also love media.


Category: E-Commerce

 

LATEST NEWS

2025-02-12 09:19:00| Fast Company

The AI landscape is rapidly evolving, with America's $500 billion Stargate Project signaling massive infrastructure investment while China's DeepSeek emerges as a formidable competitor. DeepSeek's advanced AI models, rivaling Western capabilities at lower costs, raise significant concerns about potential cybersecurity threats, data mining, and intelligence gathering on a global scale. This development highlights the urgent need for robust AI regulation and security measures in the U.S.

As the AI race intensifies, the gap between technological advancement and governance widens. The U.S. faces the critical challenge of not only accelerating its AI capabilities through projects like Stargate but also developing comprehensive regulatory frameworks to protect its digital assets and national security interests. With DeepSeek's potential to overcome export controls and conduct sophisticated cyber operations, the U.S. must act swiftly to ensure its AI innovations remain secure and competitive in this rapidly changing technological landscape.

We have already seen the first wave of AI-powered dangers. Deepfakes, bot accounts, and algorithmic manipulation on social media have all helped undermine social cohesion while contributing to the creation of political echo chambers. But these dangers are child's play compared to the risks that will emerge in the next five to ten years. During the pandemic, we saw the unparalleled speed with which new vaccines could be developed with the help of AI. As Mustafa Suleyman, founder of DeepMind and now CEO of Microsoft AI, has argued, it will not be long before AI can design new bioweapons with equal speed. And these capabilities will not be confined to state actors. Just as modern drone technology has recently democratized access to capabilities that were once the sole province of the military, any individual with even a rudimentary knowledge of coding will soon be able to weaponize AI from their bedroom at home.

The fact that U.S. senators were publicly advocating the shooting down of unmanned aircraft systems, despite the lack of any legal basis for doing so, is a clear sign of a systemic failure of control. This failure is even more concerning than the drone sightings themselves. When confidence in the government's ability to handle such unexpected events collapses, the result is fear, confusion, and conspiratorial thought. But there is much worse to come if we fail to find new ways to regulate novel technologies. If you think the systemic breakdown in response to drone sightings is worrying, imagine how things will look when AI starts causing problems.

Seven years spent helping the Departments of Defense and Homeland Security with innovation and transformation (both organizational and digital) has shaped my thinking about the very real geopolitical risks that AI and digital technologies bring with them. But these dangers do not come only from outside our country. The past decade has seen an increasing tolerance among many U.S. citizens for the idea of political violence, a phenomenon that has been cast into particularly vivid relief in the wake of the shooting of UnitedHealthcare CEO Brian Thompson. As automation replaces increasing numbers of jobs, it is entirely possible that a wave of mass unemployment will lead to severe unrest, multiplying the risk that AI will be used as a weapon to lash out at society at large.

These dangers will be on our doorsteps soon. But even more concerning are the unknown unknowns. AI is developing at lightning speed, and even those responsible for that development have no idea exactly where we will end up. Nobel laureate Geoffrey Hinton, the so-called Godfather of AI, has said there is a significant chance that artificial intelligence will wipe out humanity within just 30 years. Others suggest that the time horizon is much narrower. The simple fact that there is so much uncertainty about the direction of travel should concern us all deeply. Anyone who is not at least worried has simply not thought hard enough about the dangers.

"The regimented regulation has to be risk-based"

We cannot afford to treat AI regulation in the same haphazard fashion that has been applied to drone technology. We need an adaptable, far-reaching, and future-oriented approach to regulation that is designed to protect us from whatever might emerge as we push back the frontiers of machine intelligence.

During a recent interview with Senator Richard Blumenthal, I discussed the question of how we can effectively regulate a technology that we do not yet fully understand. Blumenthal is the coauthor, with Senator Josh Hawley, of the Bipartisan Framework for U.S. AI Act, also known as the Blumenthal-Hawley Framework. Blumenthal proposes a relatively light-touch approach, suggesting that the way the U.S. government regulates the pharmaceutical industry can serve as a model for our approach to AI. This approach, he argues, provides for strict licensing and oversight of potentially dangerous emerging technologies without placing undue restrictions on the ability of American companies to remain world leaders in the field. "We don't want to stifle innovation," Blumenthal says. "That's why the regimented regulation has to be risk-based. If it doesn't pose a risk, we don't need a regulator."

This approach offers a valuable starting point for discussion, but I believe we need to go further. While a pharmaceutical model may be sufficient for regulating corporate AI development, we also need a framework that will limit the risks posed by individuals. The manufacturing and distribution of pharmaceuticals requires significant infrastructure, but computer code is an entirely different beast, one that can be replicated endlessly and transmitted anywhere on the planet in a fraction of a second. The possibility of problematic AI being created and leaking out into the wild is simply much higher than is the case for new and dangerous drugs. Given the potential for AI to generate extinction-level outcomes, it is not too far-reaching to say that the regulatory frameworks surrounding nuclear weapons and nuclear energy are more appropriate for this technology than those that apply in the drug industry.

The announcement of the Stargate Project adds particular urgency to this discussion. While massive private-sector investment in AI infrastructure is crucial for maintaining American technological leadership, it also accelerates the timeline for developing comprehensive regulatory frameworks. We cannot afford to have our regulatory responses lag years behind technological developments when those developments are being measured in hundreds of billions of dollars.

However we choose to balance the risks and rewards of AI research, we need to act soon. As we saw with the drone sightings that took place before Christmas, the lack of a comprehensive and cohesive framework for managing the threats from new technologies can leave government agencies paralyzed. And with risks that take in anything up to and including the extinction of humanity, we cannot afford this kind of inertia and confusion.

We need a comprehensive regulatory framework that balances innovation with safety, one that recognizes both AI's transformative potential and its existential dangers. That means:

Promoting responsible innovation. Encouraging the development and deployment of AI technologies in critical sectors in a safe and ethical manner.

Establishing robust regulations. Public trust in AI systems requires both clear and enforceable regulatory frameworks and transparent systems of accountability.

Strengthening national security. Policymakers must leverage AI to modernize military capabilities, deploying AI solutions that predict, detect, and counter cyber threats while ensuring ethical use of autonomous systems.

Investing in workforce development. As a nation, we must establish comprehensive training programs that upskill workers for AI-driven industries while enhancing STEM (science, technology, engineering, and math) education to build foundational AI expertise among students and professionals.

Leading in global AI standards. The U.S. must spearhead efforts to establish global norms for AI use by partnering with allies to define ethical standards and to safeguard intellectual property.

Addressing public concerns. Securing public trust in AI requires increasing transparency about the objectives and applications of AI initiatives while also developing strategies to mitigate job displacement and ensure equitable economic benefits.

The Stargate investment represents both the promise and the challenge of AI development. While it demonstrates America's potential to lead the next technological revolution, it also highlights the urgent need for regulatory frameworks that can match the pace and scale of innovation. With investments of this magnitude reshaping our technological landscape, we cannot afford to get this wrong. We may not get a second chance.


Category: E-Commerce

 

2025-02-12 09:00:00| Fast Company

AI rivalry heats up: Glean CEO Arvind Jain replies to Sam Altman's caution to investors.


Category: E-Commerce

 

Latest from this category

22.02 Pokémon cards spiked 20% in value over the past few months. Here's why
22.02 Housing market map: Zillow just revised its 2025 home price forecast
22.02 Did you get a 1099-K? New IRS rules will impact millions of gig workers and freelancers
22.02 National Margarita Day 2025: Shake up your happy hour with these drink deals and a little bit of cocktail history
22.02 "I'm a big believer in reading a room": Kate Aronowitz of Google Ventures on balancing business and creativity
22.02 This slick new service puts ChatGPT, Perplexity, and Wikipedia on the map
22.02 The next wave of AI is here: Autonomous AI agents are amazing, and scary
22.02 Apple's hidden white noise feature may be just the productivity boost you need
E-Commerce »

All news

23.02 Today's Headlines
22.02 An XR game trilogy based on Neon Genesis Evangelion is in the works
22.02 Plane that flipped over in Canada highlights some of the dangers of holding kids on your lap
22.02 The secretive X-37B space plane snapped this picture of Earth from orbit
22.02 The creator of My Friend Pedro has a new game on the way, and it looks amazingly weird
22.02 What we're listening to: Bad Bunny, The Weeknd, FKA twigs and more
22.02 ASUS' new mouse has a built-in aromatic oil diffuser
22.02 Warren Buffett celebrates Berkshire Hathaway's success over 60 years as CEO while admitting mistakes
More »