The low point in Palantir's very first quest for investors came during a pitch meeting in 2004 that CEO Alex Karp and some colleagues had with Sequoia Capital, which was arguably the most influential Silicon Valley VC firm. Sequoia had been an early investor in PayPal; its best-known partner, Michael Moritz, sat on the company's board and was close to PayPal founder Peter Thiel, who had recently launched Palantir. But Sequoia proved no more receptive to Palantir than any of the other VCs that Karp and his team visited; according to Karp, Moritz spent most of the meeting absentmindedly doodling in his notepad. Karp didn't say anything at the time, but later wished that he had. "I should have told him to go fuck himself," he says, referring to Moritz. But it wasn't just Moritz who provoked Karp's ire: the VC community's lack of enthusiasm for Palantir made Karp contemptuous of professional investors in general. It became a grudge that he nurtured for years after.

But the meetings on Sand Hill Road weren't entirely fruitless. After listening to Karp's pitch and politely declining to put any money into Palantir, a partner with one venture capital firm had a suggestion: if Palantir was really intent on working with the government, it could reach out to In-Q-Tel, the CIA's venture capital arm. In-Q-Tel had been started a few years earlier, in 1999 (the name was a playful reference to Q, the technology guru in the James Bond films). CIA Director George Tenet believed that establishing a quasi-public venture capital fund through which the agency could incubate start-ups would help ensure that the U.S. intelligence community retained a technological edge.
The CIA had been created in 1947 for the purpose of preventing another Pearl Harbor, and a half century on, its primary mission was still to prevent attacks on American soil. Two years after In-Q-Tel was founded, the country experienced another Pearl Harbor: the 9/11 terrorist attacks, a humiliating intelligence failure for the CIA and Tenet. At the time, In-Q-Tel was working out of a Virginia office complex known, ironically, as the Rosslyn Twin Towers, and from the twenty-ninth-floor office, employees had an unobstructed view of the burning Pentagon. In-Q-Tel's CEO was Gilman Louie, who had worked as a video game designer before being recruited by Tenet (Louie specialized in flight simulators; his were so realistic that they were used to help train Air National Guard pilots). Ordinarily, Louie did not take part in pitch meetings; he let his deputies do the initial screening. But because Thiel was involved, he made an exception for Palantir and sat in on its first meeting with In-Q-Tel.

What Karp and the other Palantirians didn't know when they visited In-Q-Tel was that the CIA was in the market for new data analytics technology. At the time, the agency was mainly using a program called Analyst's Notebook, which was manufactured by i2, a British company. According to Louie, Analyst's Notebook had a good interface but had certain deficiencies when it came to data processing that limited its utility. "We didn't think their architecture would allow us to build next-generation capabilities," Louie says. Louie found Karp's pitch impressive. "Alex presented well," he recalls. "He was very articulate and very passionate." As the conversation went on, Karp and his colleagues talked about IGOR, PayPal's pioneering fraud-detection system, and how it had basically saved PayPal's business, and it became apparent to Louie that they might just have the technical aptitude to deliver what he was looking for.
But he told them that the interface was vital: the software would need to organize and present information in a way that made sense for the analysts using it, and he described some of the features they would expect. Louie says that as soon as he brought this up, the Palantir crew got out of "sales mode," immediately switched into "engineering solving mode," and began brainstorming in front of the In-Q-Tel team. "That was what I wanted to see," says Louie. He sent them away with a homework assignment: he asked them to design an interface that could possibly appeal to intelligence analysts. On returning to Palo Alto, Stephen Cohen, one of Palantir's co-founders, then 22 years old, and an ex-PayPal engineer named Nathan Gettings sequestered themselves in a room and built a demo that included the elements that Louie had highlighted. A few weeks later, the Palantirians returned to In-Q-Tel to show Louie and his colleagues what they had come up with. Louie was impressed by its intuitive logic and elegance. "If Palantir doesn't work, you guys have a bright future designing video games," he joked. In-Q-Tel ended up investing $1.25 million in exchange for equity; with that vote of confidence, Thiel put up another $2.84 million. (In-Q-Tel did not get a board seat in return for its investment; even after Palantir began attracting significant outside money, the company never gave up a board seat, which was unusual, and to its great advantage.) Karp says the most beneficial aspect of In-Q-Tel's investment was not the money but the access that it gave Palantir to the CIA analysts who were its intended customers. Louie believed that the only way to determine whether Palantir could really help the CIA was to embed Palantir engineers in the agency; to build software that was actually useful, the Palantirians needed to see for themselves how the analysts operated. "A machine is not going to understand your workflows," Louie says. "That's a human function, not a machine function."
The other reason for embedding the engineers was that it would expedite the process of figuring out whether Palantir could, in fact, be helpful. If the CIA analysts didn't think Palantir was capable of giving them what they needed, they were going to quickly let their superiors know. "We were at war," says Louie, "and people did not have time to waste." Louie had the Palantir team assigned to the CIA's terrorism finance desk. There they would be exposed to large data sets, including data collected by financial institutions as well as by the CIA. This would be a good test of whether Karp and his colleagues could deliver: tracking the flow of money was going to be critical to disrupting future terrorist plots, and it was exactly the kind of task that the software would have to perform in order to be of use to the intelligence community. But Louie also had another motive: although Karp and Thiel were focused on working with the government, Louie thought that Palantir's technology, if it proved viable, could have applications outside the realm of national security, and if the company hoped to attract future investors, it would ultimately need to develop a strong commercial business.

Stephen Cohen and engineer Aki Jain worked directly with the CIA analysts. Both had to obtain security clearances, and over time, numerous other Palantirians would do the same. Some, however, refused: they worried about Big Brother, or they didn't want the FBI combing through their financial records, or they enjoyed smoking pot and didn't want to give it up. Karp was one of the refuseniks, as was Joshua Goldenberg, the head of design. Goldenberg says there were times when engineers working on classified projects needed his help. But because they couldn't share certain information with him, they would resort to hypotheticals. As Goldenberg recalls, "Someone might say, 'Imagine there's a jewel thief and he's stolen a diamond, and he's now in a city and we have people following him. What would that look like? What tools would you need to be able to do that?'"

Starting in 2005, Cohen and Jain traveled on a biweekly basis from Palo Alto to the CIA's headquarters in Langley, Virginia. In all, they made the trip roughly two hundred times. They became so familiar at the CIA that analysts there nicknamed Cohen "Two Weeks." The Palantir duo would bring with them the latest version of the software, the analysts would test it out and offer feedback, and Cohen and Jain would return to California, where they and the rest of the team would address whatever problems had been identified and make other tweaks. In working side by side with the analysts, Cohen and Jain were pioneering a role that would become one of Palantir's signatures. It turned out that dispatching software engineers to job sites was a shrewd strategy: it was a way of discovering what clients really needed in the way of technological help, of developing new features that could possibly be of use to other customers, and of building relationships that might lead to additional business within an organization. The forward-deployed engineers, as they came to be called, proved to be almost as essential to Palantir's eventual success as the software itself. But it was that original deployment to the CIA, and the iterative process that it spawned, that enabled Palantir to successfully build Gotham, its first software platform.

Ari Gesher, an engineer who was hired in 2005, says that from a technology standpoint, Palantir was pursuing a very ambitious goal. Some software companies specialized in front-end products, the stuff you see on your screen. Others focused on the back end, the processing functions. Palantir, says Gesher, understood that "you needed to do deep investments in both to generate outcomes for users." According to Gesher, Palantir also stood apart in that it aimed to be both a product company and a service company.
Most software makers were one or the other: they either custom-built software, or they sold off-the-shelf products that could not be tailored to the specific needs of a client. Palantir was building an off-the-shelf product that could also be customized. Despite his lack of technical training (or, perhaps, because of it), Karp had also come up with a novel idea for addressing worries about civil liberties: he asked the engineers to build privacy controls into the software. Gotham was ultimately equipped with two guardrails: users were able to access only information that they were authorized to view, and the platform generated an audit trail that indicated if someone tried to obtain material off-limits to them. Karp liked to call it a Hegelian remedy to the challenge of balancing public safety and civil liberties, a synthesis of seemingly irreconcilable objectives. As he told Charlie Rose during an interview in 2009, "It is the ultimate Silicon Valley solution: you remove the contradiction, and we all march forward."

In the end, it took Palantir around three years, lots of setbacks, and a couple of near-death experiences to develop a marketable software platform that met these parameters. "There were moments where we were like, 'Is this ever going to see the light of day?'" Gesher says. The work was arduous, and there were times when the money ran short. A few key people grew frustrated and talked of quitting. Palantir also struggled to win converts at the CIA. Even though In-Q-Tel was backing Palantir, analysts were not obliged to switch to the company's software, and some who tried it were underwhelmed. But in what would become another pattern in Palantir's rise, one analyst was not just won over by the technology; she turned into a kind of in-house evangelist on Palantir's behalf. Sarah Adams discovered Palantir not at Langley, but on a visit to Silicon Valley in late 2006. Adams worked on counterterrorism as well, but in a different section.
She joined a group of CIA analysts at a conference in the Bay Area devoted to emerging technologies. Palantir was one of the vendors, and Stephen Cohen demoed its software. Adams was intrigued by what she saw, exchanged contact information with Cohen, and upon returning to Langley asked her boss if her unit could do a pilot program with Palantir. He signed off on it, and a few months later, Adams and her colleagues were using Palantir's software. Adams says that the first thing that jumped out at her was the speed with which Palantir churned data. "We were a fast-moving shop; we were kind of the point of the spear, and we needed faster analytics," she says. According to Adams, Palantir's software also had a smartness that Analyst's Notebook lacked. It wasn't just better at unearthing connections; even its basic search function was superior. Often, names would be misspelled in reports, or phone numbers would be written in different formats (dashes between numbers, no dashes between numbers). If Adams typed in "David Petraeus," Palantir's search engine would bring up all the available references to him, including ones where his name had been incorrectly spelled. This ensured that she wasn't deprived of possibly important information simply because another analyst or a source in the field didn't know that it was Petraeus. Beyond that, Palantir's software just seemed to reflect an understanding of how Adams and other analysts did their jobs: the kind of questions they were seeking to answer, and how they wanted the answers presented. She says that Palantir "made my job a thousand times easier. It made a huge difference." Her advocacy was instrumental in Palantir's securing a contract with the CIA. Similar stories would play out in later deployments: one employee would end up championing Palantir, and that person's proselytizing would eventually lead to a deal.
But the CIA was the breakthrough: it was proof that Palantir had developed software that really worked, and also the realization of the ambition that had brought the company into being. Palantir had been founded by Peter Thiel for the purpose of assisting the U.S. government in the war on terrorism, and now the CIA had formally enlisted its help in that battle.

Palantir's foray into domestic law enforcement was an extension of its counterterrorism work. In 2007, the New York City Police Department's intelligence unit began a pilot program using Palantir's software. Before 9/11, the intelligence division had primarily focused on crime syndicates and narcotics. But its mandate changed after the terrorist attacks. The city tapped David Cohen, a CIA veteran who had served as the agency's deputy director of operations, to run the unit, and with the city's blessing, he turned it into a full-fledged intelligence service employing some one thousand officers and analysts. Several dozen members of the team were posted overseas, in cities including Tel Aviv, Amman, Abu Dhabi, Singapore, London, and Paris. "The rationale for the N.Y.P.D.'s transformation after September 11th had two distinct facets," The New Yorker's William Finnegan wrote in 2005. "On the one hand, expanding its mission to include terrorism prevention made obvious sense. On the other, there was a strong feeling that federal agencies had let down New York City, and that the city should no longer count on the Feds for its protection." Finnegan noted that the NYPD was encroaching on areas normally reserved for the FBI and the CIA but that the federal agencies had silently acknowledged New York's right to take extraordinary defensive measures. Cohen became familiar with Palantir while he was still with the CIA, and he decided that the company's software could be of help to the intelligence unit. In what was becoming a familiar refrain, there was internal resistance.
"For the average cops, it was just too complicated," says Brian Schimpf, one of the first forward-deployed engineers assigned to the NYPD. "They'd be like, 'I just need to look up license plates, bro; I don't need to be doing these crazy analytical processes.'" IBM's technology was the de facto incumbent at the NYPD, which also made it hard to convert people. Another stumbling block was price: Palantir was expensive, and while the NYPD had an ample budget, not everyone thought it was worth the investment. But the software caught on with some analysts, and over time, what began as a counterterrorism deployment moved into other areas, such as gang violence.

This mission creep was something that privacy advocates and civil libertarians anticipated. Their foremost worry, in the aftermath of 9/11, was that innocent people would be ensnared as the government turned to mass surveillance to prevent future attacks, and the NSA scandal proved that these concerns were warranted. But another fear was that tools and tactics used to prosecute the war on terrorism would eventually be turned on Americans themselves. The increased militarization of police departments showed that defending the homeland had indeed morphed into something more than just an effort to thwart jihadis. Likewise, police departments also began to use advanced surveillance technology. Andrew Guthrie Ferguson, a professor of law at George Washington University who has written extensively about policing and technology, says that capabilities that had been developed to meet the terrorism threat were now being redirected at the domestic population. Palantir was part of this trend. In addition to its work with the NYPD, it provided its software to the Cook County Sheriff's Office (a relationship that was part of a broader engagement with the city and that would dissolve in controversy). However, it attracted much of its police business in its own backyard: California.
The Long Beach and Burbank Police Departments used Palantir, as did the sheriff's departments in Los Angeles and Sacramento Counties. The company's technology was also used by several fusion centers in California; these were regional intelligence bureaus established after 9/11 to foster closer collaboration between federal agencies and state and local law enforcement, with a focus on countering terrorism and other criminal activity. But Palantir's most extensive and longest-lasting law enforcement contract was with the Los Angeles Police Department, a relationship that began in 2009. The LAPD was looking for software that could improve situational awareness for officers in the field, software that would allow them to quickly access information about, say, a suspect or about previous criminal activity on a particular street. Palantir's technology soon became a general investigative tool for the LAPD. The department also started using Palantir for a crime-prevention initiative called LASER. The goal was to identify hot spots: streets and neighborhoods that experienced a lot of gun violence and other crimes. The police would then put more patrols in those places. As part of the stepped-up policing, officers would submit information about people they had stopped in high-crime districts to a Chronic Offenders Bulletin, which flagged individuals whom the LAPD thought were likely to be repeat offenders. This was predictive policing, a controversial practice in which quantitative analysis is used to pinpoint areas prone to crime and individuals who are likely to commit or fall victim to crimes. To critics, predictive policing is something straight out of the Tom Cruise thriller Minority Report, in which psychics identify murderers before they kill, but even more insidious. They believe that data-driven policing reinforces biases that have long plagued America's criminal justice system and inevitably leads to racial profiling. Karp was unmoved by that argument.
In his judgment, crime was crime, and if it could be prevented or reduced through the use of data, that was a net plus for society. Blacks and Latinos, no less than whites, wanted to live in safe communities. And for Karp, the same logic that guided Palantir's counterterrorism work applied to its efforts in law enforcement: people needed to feel safe in their homes and on their streets, and if they didn't, they would embrace hard-line politicians who would have no qualms about trampling on civil liberties to give the public the security it demanded. Palantir's software, at least as Karp saw it, was a mechanism for delivering that security without sacrificing privacy and other personal freedoms. However, community activists in Los Angeles took a different view of Palantir and the kind of police work that the company was enabling. An organization called the Stop LAPD Spying Coalition organized protests and also published studies highlighting what it claimed was algorithm-driven harassment of predominantly black and Latino neighborhoods and of people of color. LASER, it said, amounted to a "racist feedback loop." In the face of criticism, the LAPD grew increasingly sensitive about its predictive policing efforts and its ties to Palantir.

To Karp, the fracas over Palantir's police contracts was emblematic of what he saw as the left's descent into mindless dogmatism. He said that many liberals now seemed to reject quantification of any kind. "And I don't understand how being anti-quantitative is in any way progressive." Karp said that he was actually the true progressive. "If you are championing an ideology whose logical consequence is that thousands and thousands and thousands of people over time that you claim to defend are killed, maimed, go to prison, how is what I'm saying not progressive when what you are saying is going to lead to a cycle of poverty?"
He conceded, though, that partnering with local law enforcement, at least in the United States, was just too complicated. "Police departments are hard because you have an overlay of legitimate ethical concerns," Karp said. "I would also say there is a politicization of legitimate ethical issues to the detriment of the poorest members of our urban environments." He acknowledged, too, that the payoff from police work wasn't enough to justify the agita that came with it. And in truth, there hadn't been much of a payoff; indeed, Palantir's technology was no longer being used by any U.S. police departments. The New York City Police Department had terminated its contract with Palantir in 2017 and replaced the company's software with its own data analysis tool. In 2021, the Los Angeles Police Department had ended its relationship with Palantir, partly in response to growing public pressure. So had the city of New Orleans, after an investigation by The Verge caused an uproar. But Palantir still had contracts with police departments in several European countries. And since 2014, Palantir's software has been used in domestic operations by U.S. Immigration and Customs Enforcement, work that has expanded under the second Trump administration and earned criticism from a number of former employees.

In 2019, when I was working on my story about Palantir for The New York Times Magazine, I tried to meet with LAPD officials to talk about the company's software, but they declined. Six years earlier, however, a Princeton doctoral candidate named Sarah Brayne, who was researching the use of new technologies by police departments, was given remarkable access to the LAPD. She found that Palantir's platform was used extensively (more than one thousand LAPD employees had access to the software) and was taking in and merging a wide range of data, from phone numbers to field interview cards (filed by police every time they made a stop) to images culled from automatic license plate readers, or ALPRs.
Through Palantir, the LAPD could also tap into databases of police departments in other jurisdictions, as well as those of the California state police. In addition, they could pull up material that was completely unrelated to criminal justice: social media posts, foreclosure notices, utility bills. Via Palantir, the LAPD could obtain a trove of personal information. Not only that: through the network analysis that the software performed, the police could identify a person of interest's family members, friends, colleagues, associates, and other relations, putting all of them in the LAPD's purview. It was a virtual dragnet, a point made clear by one detective who spoke to Brayne. "Let's say I have something going on with the medical marijuana clinics where they're getting robbed," he said. "I can put in an alert to Palantir that says anything that has to do with medical marijuana plus robbery plus male, black, six foot." He readily acknowledged that these searches could just be fishing expeditions and even used a fishing metaphor. "I like throwing the net out there, you know?" he said.

Brayne's research showed the potential for abuse. It was easy, for instance, to conjure nightmare scenarios involving ALPR data. A detective could discover that a reluctant witness was having an affair and use that information to coerce his testimony. There was also the risk of misconduct outside the line of duty: an unscrupulous analyst could conceivably use Palantir's software to keep tabs on his ex-wife's comings and goings. Beyond that, millions of innocent people were unknowingly being pulled into the system simply by driving their cars. When I spoke to Brayne, she told me that what most troubled her about the LAPD's work with Palantir was the opaqueness. "Digital surveillance is invisible," she said. "How are you supposed to hold an institution accountable when you don't know what they are doing?"
Adapted from The Philosopher in the Valley: Alex Karp, Palantir, and the Rise of the Surveillance State by Michael Steinberger. Copyright 2025. Reprinted by permission of Avid Reader Press, an Imprint of Simon & Schuster Inc.
In the modern working world, employees have a lot on their minds. From the high cost of living to pressing political issues, there is no shortage of worries to go around. But worries at work are stacking up, too, with many feeling uncertain about their future employment in the face of AI. While workplaces are seeing some benefits from automating tasks with AI, there's another not-so-secret problem with the technology taking off: employee anxiety. In part, that's because workers are deeply stressed about being replaced, but there are also learning curves that come with working alongside the technology. Also notable, one recent study found that AI is making workers' jobs harder in another way: it messes with managers' expectations, meaning they end up giving employees more work that they expect completed in less time.

Holding space for AI-xiety

In the face of such significant change, some say that leaders have a new job to do: they need to hold space for all the anxiety around AI, or AI-xiety, if you will. Heidi Brooks, a leadership expert and senior lecturer in organizational behavior at the Yale School of Management, tells Fast Company that because anxiety is now "a central part of the workplace experience," leaders need to meet the moment. But it's not necessarily about trying to calm or settle worries, and it's definitely not about ignoring them altogether. Instead, it's about being present. "Presence isn't just about showing up; it's about how you show up," Brooks explains. "It's the groundedness, the way you stay in touch with people in the midst of ambiguity or distress, without rushing to fix or smooth things over." Brooks adds that while it may feel more comfortable to avoid the worries, "choosing to stay steady in the face of uncertainty is a quiet but powerful form of leadership."

Communication is key

As concerns around AI boom, issues like burnout are skyrocketing at the same time. It's no secret that many employees are feeling unsettled.
That means bosses need to do more than just say they're there for workers. As Brooks puts it, "Presence is in the eyes of the beholder." Therefore, employees have to feel that from you. "Communication, in this anxious context, becomes more than just information-sharing. It's a form of containment," Brooks says. "Silence can promote fear, and in the absence of communication, people can fill the gaps with worst-case scenarios." Therefore, even if leaders aren't sure themselves how to fix the issues employees are worried about, keeping communication open is, in itself, still an effective tool. Recent research supports the expert's insight, too: a recent survey of frontline workers in the AI age found that while only 17% said their organization is transparent about AI integration, 63% said communication about the technology is essential. "If you explain it, we'll accept it," one worker said. "If you don't, we'll resist." Brooks says employers don't need to have all the answers to be good communicators and to calm fears. "It's not about false certainty," she explains. "It's about helping people feel less alone in the uncertainty, and perhaps even inviting them to be part of the learning process by inviting their voice."

Leaders need check-ins, too

Undoubtedly, leaders are in a new era, too. They have big challenges ahead of them as they learn to work with automation. Brooks says leaders are also learning to "hold space for human experience . . . as we find our way forward" in the AI age. But not only do leaders have to worry about their teams; they also need to check in with themselves, especially around their own anxieties and struggles when it comes to new technology. "It's a good time not only to be intentional about touching base with people on your teams, but for you to do the same for yourself," Brooks says. Leaders, then, also need the space to air their own fears, in addition to being a sounding board for others.
Brooks adds, "When we can be real about naming what we are going through, we are often wiser together, because we can discuss what's happening and learn our way forward."
A snaggle tooth. A gap in someone's smile. A birthmark or mole. What do these facial features have in common? They all have wabi-sabi. That's according to TikTok's latest trend, which has users highlighting their imperfections and labeling them "wabi-sabi." Not to be confused with the sushi accompaniment, wabi-sabi is a Japanese aesthetic philosophy that finds beauty in imperfection and the natural process of aging, something we could all use a little more of in the age of preventative facelifts. The concept celebrates imperfection and the natural wear and tear that occurs with the passage of time, whether that's a gently worn step, a chipped mug, or smile lines.

A sound featuring the term has since gone viral on TikTok, introducing many to the idea for the first time. Nearly half a million videos have been posted under the viral audio, which originates from the animated sitcom King of the Hill. In one episode, the character Bobby Hill picks up a rose and says, "I like how mine's a little off-center. It's got wabi-sabi." That clip has since been repurposed by users celebrating everything from crooked teeth to aquiline noses as wabi-sabi. As with any concept that takes off on TikTok, some of the subtlety of the original philosophy has been lost as it spreads online. Yet, in an age of unrealistic beauty standards, looksmaxxing, and aesthetic microtrends, one that celebrates individuality and acceptance of perceived flaws is a step in the right direction.

In 2019, The New Yorker declared it "The Age of Instagram Face." Six years on, Dazed wrote that we have entered the age of "TikTok Face." "Aesthetic inflation," the normalization of more and more extreme cosmetic interventions over time, as defined by Flesh World writer Jessica Defino, has eaten our collective brain. In just the past few weeks, headlines about the "skinny BBL" (that's Brazilian butt lift, for the uninitiated) and facelifts at 28 demonstrate the pervasiveness of the pursuit of aesthetic perfection.
It's a vicious feedback loop that also bleeds into our offline reality, with plenty of research finding a correlation between time spent online and the desire for plastic surgery. If you've noticed the faces you see online slowly morphing into one and the same, you are not losing your mind. These days, we could all stand to embrace more wabi-sabi.