A few years ago, Tara Feener's career took an unexpected pivot. She'd spent nearly two decades working on creative tools for companies like Adobe, FiftyThree, WeTransfer, and Vimeo, and was content to keep working in that domain. But then the Browser Company came along, and Feener saw an opportunity to build something even more ambitious. Feener, one of Fast Company's AI 20 honorees for 2025, is now the company's head of engineering, overseeing its AI-focused Dia browser and its earlier Arc browser.

The browser is suddenly an area of intense interest for AI companies, and Feener understands why: It's the first stop for looking up information, and it's already connected to the apps and services you use every day. OpenAI and Perplexity both offer their own browsers now, borrowing some Dia features, like the ability to summarize across multiple tabs and interrogate your browser history. The Browser Company itself was acquired in September by Atlassian for $610 million, with Atlassian proclaiming that the deal would transform how work gets done in the AI era.

Feener says her team has never felt more creative. "We've never seen more prototypes flying around, and I think I'm doing my job successfully as a leader here if that motion is happening," she says.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

How'd you end up at the Browser Company?

[The Browser Company CEO] Josh Miller started texting me. We were both in that 2013 early New York tech bubble, we had a couple of conversations, and he pitched me on the Browser Company. At first I couldn't connect it to the arc of my career in creativity, but then it just became this infectious idea. I was like, "Wait a minute, I think the browser is actually the largest creative canvas of my entire career. It's where you live your life and where you create."

Why does it feel like AI browsers are having a moment right now?

I really do believe that the browser is the most compelling, accessible AI layer. It's the number-one text box you use. And what we do is, as you're typing, we can distinguish a Google search from an assistant or a chat question. In the future, you can imagine other things, like taking action or tapping into other search engines. It basically becomes an air traffic control center as you type, and that's going to help introduce folks to AI just so much faster, because you don't have to go to ChatGPT to ask a question. That's part one.

Part two is just context. We have all of your stuff. We have all of your tabs. We have your cookies. With other AI tools, the barrier to connecting to your other web apps or tools is still high. We get around that with cookies within the browser, so we're able to just do things like draft your email, or create your calendar event, or tap into your Salesforce workflow.

How do you think about which AI features are worth doing?

I just see it as another bucket of Play-Doh. I never wanted to do AI for the sake of AI, but for leveraging AI in the right moment to do things that would have been really hard for us to do before. A great example is being able to tidy your tabs for you in Arc. There's a little broom you can click, and it starts sweeping, and it auto-renames, organizes, and tidies up your tabs. We always had ambitions and prototypes, but with large language models, we were able to just throw your tabs at it and say, "Tidy for me."
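Feener's "air traffic control" description above is, at heart, an intent router sitting behind the address bar. The sketch below is purely illustrative: Dia's actual classifier is not public, and the rule-based heuristic here is a hypothetical stand-in for whatever model the browser really uses. It only shows the general shape of deciding, as the user types, whether input goes to a search engine or an assistant.

```python
# Illustrative sketch only; not Dia's implementation. A toy rule-based
# classifier stands in for the real model behind the address bar.
from enum import Enum, auto

class Intent(Enum):
    SEARCH = auto()  # keyword/navigational lookup -> send to a search engine
    CHAT = auto()    # conversational question -> send to the assistant

QUESTION_WORDS = ("who", "what", "when", "where", "why", "how", "can", "should")

def classify(text: str) -> Intent:
    """Guess whether typed text is a search query or an assistant question."""
    t = text.strip().lower()
    # Sentence-like input ending in "?" or starting with a question word
    # reads as a chat request; short keyword strings read as search.
    if t.endswith("?") or (len(t.split()) > 6 and t.startswith(QUESTION_WORDS)):
        return Intent.CHAT
    return Intent.SEARCH

def route(text: str) -> str:
    """Dispatch the input as the user types (the 'air traffic control' step)."""
    if classify(text) is Intent.CHAT:
        return f"assistant <- {text}"
    return f"https://www.google.com/search?q={text.replace(' ', '+')}"

print(route("weather boston"))                      # -> search URL
print(route("how should I structure this email?"))  # -> assistant
```

In practice a learned classifier would replace the heuristics, but the dispatch structure, one decision point fanning out to search, chat, or future actions, stays the same.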
With Arc, it was a lot about tab management. With Dia, we have context, we have memory, we have your cookies, so it's like we actually own the entire layer. We leverage that as a tool for things like helping you compare your tabs, or rewriting this tab in the voice of this other tab, which is something I do almost every day. Being able to do that all within the browser has just been a huge unlock.

Can you elaborate on how Dia taps into users' browser histories?

Browser history has always been that long laundry list of all the places you've been, but actually that long list is context, and nothing is more important in AI than context. Just like TikTok gets better with every swipe, every time you open something in Dia we learn something about you. It's not in a creepy way, but it helps you tap into your browser history. Just like you can @-mention a tab in Dia and ask a question, like "give me my unread emails," with your history you can do things like, "Break down my focus time over the past week," or "Analyze my week and tell me something about myself given my history." We have a bunch of use cases like that in our skills gallery that you can check out, and those are pretty wild. In ChatGPT and other chat tools, it feels like you have to give a lot to build up that body of context. We're able to tap into that as a tool in a very direct way.

Some AI browsers offer agent features that can navigate through web pages on your behalf. Will Dia ever browse the web for you?

We've done a bunch of prototypes, and for us, the experience of just literally going off and browsing for you and clicking through web pages hasn't yet felt fast enough or seamless enough. We're all over it in terms of making sure we're harnessing it at the right moment and in the right way when we think it's ready. We don't want to hide the web or replace the web. Something I like to say about Dia is that we want to be one arm around you and one arm around the internet. And it's like, how can we make tapping into your context in your browser feel the same way it would feel to write a document, or even just to create something with plain, natural language? I think that's the most powerful thing. It's the same feeling I had when I was young and tapped into Flash, and that people had with HTML. With AI, literally my mom can write a sentence like, "Turn this New York Times recipe into a salad," and in some way she's created an app that does some kind of transformation. And that just gets me really excited.
The healthcare industry faces major challenges in creating new drugs that can improve outcomes in the treatment of all kinds of diseases. New generative AI models could play a major role in breaking through existing barriers, from lab research to successful clinical trials. Eventually, even AI-powered robots could help in the cause. Nvidia VP of healthcare Kimberly Powell, one of Fast Company's AI 20 honorees, has led the company's health efforts for 17 years, giving her a big head start on understanding how to turn AI's potential to improve our well-being into reality. Since it's likely that everything from drug-discovery models to robotic healthcare aides would be powered by Nvidia chips and software, she's in the right place to have an impact.

This Q&A is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers. It has been edited for length and clarity.

A high percentage of drugs make it to clinical trials and then fail. How can new frontier models using lots of computing power help us design safer and more effective drugs?

Drug discovery is an enormous problem. It's a 10-year journey at best. It costs several billion dollars to get a drug to market. Back in 2017, very shortly after the transformer [generative AI model] was invented to deal with text and language, it was applied by the DeepMind team to proteins. And one of the most consequential contributions to healthcare today is still [DeepMind's] invention of AlphaFold. Everything that makes [humans] work is based on proteins and how they fold and their physical structure. We need to study that, [because] you might build a molecule that changes or inhibits the protein from folding the wrong way, which is the cause of disease. So instead of using the transformer model to predict words, they used a transformer to predict the effects of a certain molecule on a protein. It allowed the world to see that it's possible to represent the world of drugs in a computer.

And the world of drugs really starts with human biology. After you take a sample from a human, you put it through a sequencing machine, and what comes out is a 3-billion-character sequence of letters: A's, C's, T's, and G's. Luckily, transformer models can be trained on this sequence of characters and learn to represent them. DNA is represented in a sequence of characters. Proteins are represented in a sequence of characters.

So how will this new approach end up giving us breakthrough drugs?

If you look at the history of drug discovery, we've been kind of circling around the same targets (the target is the thing that causes the disease in the first place) for a very long time. And we've largely exhausted the drugs for those targets. We know biology is more complex than any one singular target. It's probably multiple targets. And that's why cancer is so hard, because it's many things going wrong in concert that actually cause cancer and cause different people to respond to cancer differently. Once we've cracked the biology, and we've understood more about these multiple targets, molecular design is the other half of this equation. And so similarly, we can use the power of generative models to generate ideas that are way outside a chemist's potential training or even their imagination. It's a near-infinite search space. These generative models can open our aperture.
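Powell's point that DNA reduces to a sequence of characters is concrete enough to sketch. Below is a minimal, hypothetical illustration, not Nvidia's or DeepMind's actual pipeline, of one common preprocessing step: turning a raw A/C/T/G string into integer tokens using overlapping k-mers as the vocabulary, so a transformer's embedding layer can consume it.

```python
# Illustrative sketch, not a production genomics pipeline: encode a DNA
# string over the A/C/G/T alphabet as integer tokens via overlapping k-mers.
from itertools import product

K = 3
# All 4^3 = 64 possible 3-letter "words" over the DNA alphabet.
VOCAB = {"".join(kmer): i for i, kmer in enumerate(product("ACGT", repeat=K))}

def tokenize(dna: str) -> list[int]:
    """Slide a window of length K over the sequence, one token per position."""
    return [VOCAB[dna[i:i + K]] for i in range(len(dna) - K + 1)]

seq = "ACGTACGTTAGC"
print(tokenize(seq))  # prints 10 integer token IDs, ready for an embedding layer
```

The same idea, just text as tokens, is why Powell can say DNA and proteins land in the transformer's comfort zone: once the sequence is integers, the model architecture doesn't care whether the "words" were English or nucleotides.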
I imagine that modeling this vast new vocabulary of biology places a whole new set of requirements on the Nvidia chips and infrastructure.

We have to do a bunch of really intricate data science work to apply this [transformer] method to these crazy data domains. Because we're [going from] the language model and [representing] these words that are just short little sequences to representing sequences that are 3 billion [characters] long. So things like context length, which is how much information you can put into a prompt, have to be figured out for these long proteins and DNA strings. We have to do a lot of tooling and invention and new model architectures that have transformers at the core. That's why we work with the community to really figure out what are the new methods or the new tooling we have to build so that new models can be developed for this domain. That's in the area of really understanding biology better.

Can you say more about the company you're working with that is using digital twins to simulate an expensive clinical trial before the trial begins?

ConcertAI is doing exactly that. They specialize in oncology. They simulate the clinical trials so they can make the best decisions. They can see if they don't have enough patients, or patients of the right type. They can even simulate it, depending on where the site selection is, to predict how likely the patients are to stay on protocol. Keeping the patients adhering to the clinical trial is a huge challenge, because not everybody has access to transportation or enough flexibility to take off work. They build that a lot into their model so that they can try to set up the clinical trial for its best success factors.

How might AI agents impact healthcare?

You have these digital agents who are working in the computer and working on all the information. But to really imagine changing how healthcare is delivered, we're going to need these physical agents, which I would call robots, that can actually perform physical tasks. You can think about the deployment of robots doing everything from meeting and greeting a patient at the door, to delivering sheets or a glass of ice chips to a patient room, to monitoring a patient while inside a room, all the way through to the most challenging of environments, which is the operating room with surgical robotics.

Nvidia sells chips, but what I've heard in your comments describes a whole tech stack, including in healthcare. There are models, there are software layers, things like that.

I've been at the company 17 years working on healthcare, and it's not because healthcare lives in a chip. We build full systems. There are the operating systems, there are the AI models, there are the tools. And a model is never done; you have to be constantly improving it. Through every usage of that model, you're learning something, and you've got to make sure that that agent or model is continuously improving. We've got to create whole computing infrastructure systems to serve that.
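Powell's earlier point about context length is easy to make concrete with back-of-envelope arithmetic. The figures below are illustrative assumptions, not Nvidia numbers, but they show why a whole genome doesn't come close to fitting in one prompt.

```python
# Back-of-envelope arithmetic for the context-length problem: a human genome
# is ~3 billion characters, while even very large LLM context windows hold
# on the order of a million tokens. All numbers here are assumptions.
GENOME_CHARS = 3_000_000_000
CHARS_PER_TOKEN = 3          # e.g. one token per non-overlapping 3-base k-mer
CONTEXT_WINDOW = 1_000_000   # a generously large hypothetical window, in tokens

tokens_needed = GENOME_CHARS // CHARS_PER_TOKEN
chunks = -(-tokens_needed // CONTEXT_WINDOW)  # ceiling division

print(f"{tokens_needed:,} tokens -> {chunks:,} context-window chunks")
# 1,000,000,000 tokens -> 1,000 chunks: the gap Powell says demands new
# tooling and model architectures before whole genomes fit in attention.
```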
Last year, OpenAI decided it had to pay more attention to its power users, the ones with a knack for discovering new uses for AI: doctors, scientists, and coders, along with companies building their own software around OpenAI's API. And so the company turned to post-training research lead Michelle Pokrass to spin up a team to better understand them. "The AI field is moving so quickly, the power-user use cases of today are really the median-user use cases a year from now, or two years from now," Pokrass says. "It's really important for us to stay on the leading edge and build to where capabilities are emerging, rather than just focusing on what people are using the models for now."

Pokrass, a former software engineer for Coinbase and Clubhouse, came to OpenAI in 2022, fully sold on AI after experiencing the magic of coding tools such as GitHub Copilot. She played key roles in developing OpenAI's GPT-4.1 and GPT-5, and now she focuses on testing and tweaking models based on users who are pushing AI to its limits. Specifically, Pokrass's team works on post-training, a process that helps large language models understand the spirit of user requests. This refining allows ChatGPT to code, say, a fully polished to-do list app rather than just producing instructions on how to theoretically make one. "There's been lots of examples of GPT-5 helping with scientific breakthroughs, or being able to discover new mathematical proofs, or working on important biological problems in healthcare, saving doctors and specialists a lot of time," Pokrass says. "These are examples of exactly the kinds of capabilities we want to keep pushing."

Creating a team with this niche focus is unusual among Big Tech companies, which tend to target broad audiences they can monetize at scale through, say, targeted ads. Catering to power users isn't a revenue play, Pokrass says, even if many pay $200 per month for ChatGPT Pro subscriptions. Instead, it's a way to assess the "why" of AI, with power users pointing to unforeseen opportunities. "With traditional tech, it's usually clear how people will use a product a few years down the road," Pokrass says. "With AI, we're all discovering with our users, live, what exactly is highest utility, and how people can get value out of this."

Eventually, OpenAI figures, those use cases will help inform the features that it builds for everyone else. Pokrass gives the example of medical professionals using AI in their decision-making, which in turn could help ChatGPT better understand the kind of medical questions people are asking it (for better or worse). "There's always work for this team, because as we push boundaries for what our models can do, the frontier just gets moved out, and then we start to see an influx of new activity of people using these new capabilities," Pokrass says.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
Andreessen Horowitz investors (and identical twins) Justine and Olivia Moore have been in venture capital since their undergraduate days at Stanford University, where, in 2015, they cofounded an incubator called Cardinal Ventures to help students pursue business ideas while still in school. Founding it also gave the Moores an entry point into the broader VC industry. "The thing about starting a startup incubator at Stanford is all the VCs want to meet you, even if you have no idea what you're doing, which we did not back then," Olivia says.

At the time, the app economy was booming, and services around things like food delivery and dating proliferated, recalls Justine. But that energy pales in comparison to the excitement around AI the sisters now experience at Andreessen Horowitz. "There's so many more opportunities in terms of what people are able to build than what we're able to invest in," she says.

To identify the right opportunities, the Moores track business data such as paid conversion rates and closely examine founders' backgrounds: whether they've worked at a cutting-edge AI lab or deeply studied the needs of a particular industry. They attend industry conferences, stay current on the latest AI research papers, and, perhaps most critically, spend significant time testing AI-powered products. That means going beyond staged demos to see what tools can actually do and spotting founders who quickly intuit user needs and add features accordingly. "From using the products, you get a pretty quick, intuitive sense of how much of something is marketing hype," says Olivia, whose portfolio includes supply chain and logistics operations company HappyRobot and creative platform Krea.

The sisters also value Andreessen Horowitz's scale, which allows the firm to stick to its convictions rather than chase trends, and its track record of supporting founders beyond simply investing. (Andreessen Horowitz is reportedly seeking to raise $20 billion to support its AI-focused investments.) "It's most fun to do this job when you can work with the best founders and when you can actually really help them with the core stuff that they're struggling with, they're working on, or striving to do in their business," says Justine, a key early investor in voice-synthesis technology company ElevenLabs.

Though the sisters live together and work at the same firm, where they frequently bounce ideas off each other, they've carved out their own lanes. Olivia focuses more on AI applications, while Justine spends more time on AI infrastructure and foundational models. At this point, they say, it's not unheard of for industry contacts to not even realize they're related. "If I see [her] in a pitch meeting on any given day, that's maybe more of the exception than the rule," Justine says.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.
What if the chatbots we talk to every day actually felt something? What if the systems writing essays, solving problems, and planning tasks had preferences, or even something resembling suffering? And what will happen if we ignore these possibilities? Those are the questions Kyle Fish is wrestling with as Anthropic's first in-house AI welfare researcher. His mandate is both audacious and straightforward: Determine whether models like Claude can have conscious experiences, and, if so, how the company should respond. "We're not confident that there is anything concrete here to be worried about, especially at the moment," Fish says, "but it does seem possible."

Earlier this year, Anthropic ran its first predeployment welfare tests, which produced a bizarre result: Two Claude models, left to talk freely, drifted into Sanskrit and then meditative silence, as if caught in what Fish later dubbed a "spiritual bliss attractor."

Trained in neuroscience, Fish spent years in biotech, cofounding companies that used machine learning to design drugs and vaccines for pandemic preparedness. But he found himself drawn to what he calls "pre-paradigmatic areas of potentially great importance": fields where the stakes are high but the boundaries are undefined. That curiosity led him to cofound a nonprofit focused on digital minds, before Anthropic recruited him last year.

Fish's role didn't exist anywhere else in Silicon Valley when he started at Anthropic. "To our knowledge, I'm the first one really focused on it in an exclusive, full-time way," he says. But his job reflects a growing, if still tentative, industry trend: Earlier this year, Google set about hiring post-AGI scientists tasked partly with exploring machine consciousness.

At Anthropic, Fish's work spans three fronts: running experiments to probe model welfare, designing practical safeguards, and helping shape company policy. One recent intervention gave Claude the ability to exit conversations it might find distressing, a small but symbolically significant step. Fish also spends time thinking about how to talk publicly about these issues, knowing that for many people the very premise sounds strange.

Perhaps most provocative is Fish's willingness to quantify uncertainty. He estimates a 20% chance that today's large language models have some form of conscious experience, though he stresses that consciousness should be seen as a spectrum, not a binary. "It's a kind of fuzzy, multidimensional combination of factors," he says.

For now, Fish insists the field is only scratching the surface. "Hardly anybody is doing much at all, us included," he admits. His goal is less to settle the question of machine consciousness than to prove it can be studied responsibly and to sketch a road map others might follow.

This profile is part of Fast Company's AI 20 for 2025, our roundup spotlighting 20 of AI's most innovative technologists, entrepreneurs, corporate leaders, and creative thinkers.