Hello, and welcome to Decoder! I’m Jon Fortt — CNBC journalist, cohost of Closing Bell: Overtime, and creator of the Fortt Knox streaming series on LinkedIn. This is the last episode I’ll be guest-hosting for Nilay while he’s out on parental leave. We have an exciting crew who will take over for me after that, so stay tuned.
Today, I’m talking with Richard Robinson, who is the cofounder and CEO of Robin AI. Richard has a fascinating resume: he was a corporate lawyer for high-profile firms in London before founding Robin in 2019 to bring AI tools to the legal profession, using a mix of human lawyers and automated software expertise. That means Robin predates the big generative AI boom that kicked off when ChatGPT launched in 2022.
Listen to Decoder, a show hosted by The Verge’s Nilay Patel about big ideas — and other problems. Subscribe here!
As you’ll hear Richard say, the tools his company was building early on were based on fairly traditional AI technology — what we would have just called “machine learning” a few years ago. But as more powerful models and the chatbot explosion have transformed industries of all types, Robin AI is expanding its ambitions. It’s moving beyond just using AI to parse legal contracts into what Richard is envisioning as an entire AI-powered legal services business.
AI can be unreliable, though, and when you’re working in law, unreliable doesn’t really cut it. It’s impossible to keep count of how many headlines we’ve already seen about lawyers using ChatGPT when they shouldn’t, citing nonexistent cases and law in their filings. Those attorneys have faced not only scathing rebukes from judges but, in some cases, fines and sanctions.
Naturally, I had to ask Richard about hallucinations, how he thinks the industry could move forward here, and how he’s working to make sure Robin’s AI products don’t land any law firms in hot water.
But Richard’s background also includes professional debate; he was the head debate coach at Eton College. So much of his expertise here, right down to how he structures his answers to some of my questions, can be traced back to just how experienced he is with the art of argumentation.
So, I really wanted to spend time talking through Richard’s history with debate, how it ties into both the AI and legal industries, and how these new technologies are making us reevaluate the difference between facts and truth in unprecedented ways.
Okay: Robin AI CEO Richard Robinson. Here we go.
This interview has been lightly edited for length and clarity.
Richard Robinson, founder and CEO of Robin AI. Great to have you here on Decoder.
Thanks for having me. I really appreciate it. It’s great to be here. I’m a big listener of the show.
We’ve spoken before. I’m going to be all over the place here, but I want to start off with Robin AI. We’re talking about AI in a lot of different ways nowadays. I started off my Decoder run with former Google employee Cassie Kozyrkov, talking to her about decision science.
But this is a specific application of artificial intelligence in an industry where there’s a lot of thinking going on, and there ought to be — the legal industry. Tell me, what is Robin AI? What’s the latest?
Well, we’re building an AI lawyer, and we’re starting by helping solve problems for businesses. Our goal is to essentially help businesses grow because one of the biggest impediments to business growth is not revenue or managing your costs — it’s legal complexity. Legal problems can actually slow down businesses. So, we exist to solve those problems.
We’ve built a system that helps a business understand all of the laws and regulations that apply to them, and also all the commitments that they’ve made, their rights, their obligations, and their policies. We use AI to make it easy to understand that information and easy to use that information and ask questions about that information to solve legal problems. We call it legal intelligence. We’re taking the latest AI technologies to law school, and we’re giving them to the world’s biggest businesses to help them grow.
A year and a half ago, I talked to you, and your description was a lot heavier on contracts. But you said, “We’re heading in a direction where we’re going to be handling more than that.” It sounds like you’re more firmly in that direction now.
Yeah, that’s correct. We’ve always been limited by the technology that’s available. Before ChatGPT, we had very traditional AI models. Today we have, as you know, much more performant models, and that’s just allowed us to expand our ambition. You’re completely right, it’s not just about contracts anymore. It’s about policies, it’s about regulations, it’s about the different laws that apply to a business. We want to help them understand their entire legal landscape.
Give me a scenario here, a case study, on the sorts of things your customers are able to sort through using your technology. Recently, Robin amped up your presence on AWS Marketplace. So, there are a lot more types of companies that are going to be able to plug in Robin AI’s technology to all kinds of software and data that they have available.
So, case study, what’s the technology doing now? How is that kind of hyperscaler cloud platform potentially going to open up the possibilities for you?
We help solve concrete legal problems. A good example is that every day, people at our customers’ organizations want to know whether they’re doing something that’s compliant with their company policies. Those policies are uploaded to our platform, and anybody can just ask a question that historically would’ve gone to the legal or compliance teams. They can say, “I’ve been offered tickets to the Rangers game. Am I allowed to go under the company policy?” And we can use AI to intelligently answer that question.
Every day, businesses are signing contracts. That’s how they record pretty much all of their commercial transactions. Now, they can use AI to look back at their previous contracts, and it can help them answer questions about the new contract they’re being asked to sign. So, if you’re doing a deal with the Rangers and you worked with the Mets in the past, you might want to know what you negotiated that time. How did we get through this impasse last time? You can use the Robin platform to answer those questions.
I’ve got to go back to that Rangers game situation.
Sure.
Please tell me you’re going to be able to do away with that annoying corporate training about whether you can have the tickets or not. If that could be just a conversation with an AI instead of having to watch those videos, oh my goodness, all the money.
[Laughs] I’m trying my best. You’re hitting the nail on the head though. A lot of this stuff has caused a lot of pain for a lot of businesses, either through compliance and ethics training or long, sometimes dull courses. We can make that so much more interesting, so much more interactive, so much more real-time with AI technologies like Robin. We’re really working on it, and we’re helping solve a vast range of legal use cases that you once needed people to do.
Are you taking away the work of the junior lawyers? I’m throwing up a little bit of a straw man there, but how is it changing the work of the entry-level law student or intern who would’ve been doing the tedious stuff that AI can perhaps now do? Is there higher level work, or are they just getting used less? What are you seeing your customers do?
If a business had legal problems in the past, they would either send them to a law firm or they would try and handle them internally with their own legal team. With AI, they can handle more work internally, so they don’t have to send as much to their law firms as they used to. They now have this leverage to tackle what used to be quite difficult pieces of work. So, there’s actually more work they can do themselves now instead of having to send it outside. Then, there are some buckets of work where you don’t need people at all. You can just rely on systems like Robin to answer those compliance questions.
You’re right, the work is shifting, no doubt about it. For the most part, AI can’t replicate a whole job yet. It’s part of a job, if that makes sense. So, we’re not seeing anybody cut headcount from using our technologies, but we do think they have a much more efficient way to scale, and they’re reducing dependence on their law firms over time because they can do more in-house.
But how is it changing the work of the people who are still doing the thinking?
I think that AI goes first, basically, and that’s a big transformation. You see this in the coding space. I think coders got ahead of the legal space on adoption, but we are fast catching up. If you talk to a lot of engineers who are using these coding platforms, they’ll tell you that they want the AI to write all of the code first, but they’re not necessarily going to hit enter and use that code in production. They’re going to check, they’re going to review, they’re going to question it, interrogate it, and redirect the model where they want it to go because these models still make mistakes.
Their hands are still on the steering wheel. It’s just that they’re doing it slightly differently. They have AI go first, and then people are being used to check. We make it easy for people to check our work with pretty much everything we do. We include pinpoint citations, references, and we explain where we got our answers from. So, the role of the junior or senior lawyer is now to say, “Use Robin first.” Then, their job is to make sure that it went correctly, that it’s been used in the right way.
How are you avoiding the hallucination issue? We’ve seen these mentions in the news of lawyers submitting briefs to a judge that include stuff that is completely made up. We hear about the ones that get caught. I imagine we don’t hear about the ones that don’t get caught.
I know those are different kinds of AI uses than what you’re doing with Robin AI, but there’s still got to be this concern in a fact-based, argument-based industry about hallucination.
Yeah, there is. It’s the number one question our customers ask. I do think it’s a big part of why you need specialist models for the legal domain. It’s a specialist subject area and a specialist domain. You need to have applications like Robin and people who are not just taking ChatGPT or Anthropic and doing nothing with it. You need to really optimize its capabilities for the domain.
To answer your question directly, we include citations with very clear links to everything the model does. So, every time we give an answer, you can quickly validate the underlying source material. That’s the first thing. The second thing is that we are working very hard to only rely on external, valid, authoritative data sources. We connect the model to specific sources of information that are legally verified, so that we know we’re referencing things you can rely on.
The third is that we’re educating our customers and reminding them that they’re still lawyers. I used to write cases for courts all the time — that was my job before I started Robin — and I knew that it was my responsibility to make sure every source I referenced was 100 percent correct. It doesn’t matter which tool you use to get there. It’s on you as a legal professional to validate your sources before you send them to a judge or even before you send them to your client. Some of this is about personal responsibility because AI is a tool. You can misuse it no matter what safeguards we put in place. We have to teach people to not rely exclusively on these things because they can lie confidently. You’re going to want to check for yourself.
Right now, all kinds of relationships and arrangements are getting renegotiated globally. Deals that made sense a couple of years ago perhaps don’t anymore because of expected tariffs or frayed relationships. I imagine certain companies are having to look back at the fine print and ask, “What exactly are our rights here? What’s our wiggle room? What can we do?”
Is that a major AI use case? How are you seeing language getting combed through, comparing how it was phrased 20 years ago to how it needs to be phrased now?
That’s exactly right. Any type of change in the world triggers people to want to look back at what they’ve signed up for. And you’re right, the most topical is the tariff reform, which is affecting every global business. People want to look back at their agreements. They want to know, “Can I get out of this deal? Is there a way I can exit this transaction?” They entered into it with an assumption about what it was going to cost, and those assumptions have changed. That’s very similar to what we saw during covid when people wanted to know if they could get out of these agreements given there’s an unexpected, huge pandemic happening. We’re seeing the same thing now, but this time we have AI to help us.
So, people are looking back at historic agreements. I think they’re realizing that they don’t always know where all their contracts even are. They don’t always know what’s inside them. They don’t know who’s responsible for them. So, there is work to do to make AI more effective, but we are absolutely seeing global business customers trying to understand what the regulatory landscape means for them. That’s going to happen every time there’s regulatory change. Every time there are new laws passed, it causes businesses and even governments to look back and think about what they signed up for.
I’ll give you another quick example. When Trump introduced his executive order relating to DEI at universities, a lot of universities in the United States needed to look back and ask, “What have we agreed to? What’s in some of our grant proposals? What’s in some of our legal documents? What’s in some of our employment contracts? Who are we engaging as consultants? Is that in danger given these executive orders?” We saw that as a big use case, too. So, permanent change is a reality for business, and AI is going to help us to navigate that.
What does the AWS Marketplace do for you?
I think it gives customers confidence that they can trust us. When businesses started to adopt the cloud, the biggest reason that adoption took time was concerns about security. Keeping its data secure is probably the single most important thing for a business. It’s a never event. You can’t ever let your data be insecure.
But businesses aren’t going to be able to build everything themselves if they want the benefit of AI. They are going to have to partner with experts and with startups like Robin AI. But they need confidence that when they do that, their most sensitive documents are going to be secure and protected. So, the AWS Marketplace, first and foremost, gives us a way to give our customers confidence that what we’ve done is robust and that our application is secure because AWS security vets all the applications that are hosted on the marketplace. It gives customers trust.
So, it’s like Costco, right? I’m not a business vendor or a software company like you are, but this sounds to me like shopping at Costco. There are certain guarantees. I know its reputation because I’m a member, right? It curates what it carries on the shelves and stands behind them.
So, if I have a problem, I can just take my receipt to the front desk and say, “Hey, I bought this here.” You’re saying it’s the same thing with these AI-driven capabilities in a cloud marketplace.
That’s right. You get to leverage the brand and the reputation of AWS, which is the biggest cloud provider in the world. The other thing you get, which you mentioned, is a seat at the table for the biggest grocery store in the world. It has lots of customers. A lot of businesses make commitments to spend with AWS, and they will choose vendors who are hosted on the AWS Marketplace first. So, it gives us a position in the shop window to help us advertise to customers. That’s really what the marketplace gives to Robin AI.
I want to take a step back and get a little philosophical. We got a little in the weeds with the enterprise stuff, but part of what’s happening here with AI — and in a way with legal — is we’re having to think differently about how we navigate the world.
It seems to me that the two steps at the core of this are how do we figure out what’s true, and how do we figure out what’s fair? You are a practitioner of debate — we’ll get to that in a bit, too. I’m not a professional debater, though I have been known to play one on TV. But figuring out what’s true is step one, right?
I think it is. It’s increasingly difficult because there are so many competing facts and so many communities where people will selectively choose their facts. But you’re right, you need to establish the reality and the core facts before you can really start making decisions and debating what you should be doing and what should happen next.
I do think AI helps with all of these things, but it can also make it more difficult. These technologies can be used for good and bad. It’s not obvious to me that we’re going to get closer to establishing the truth now that we have AI.
I think you’re touching on something interesting right off the bat, the difference between facts and truth.
Yes, that’s right. It’s very difficult to really get to the truth. Facts can be selectively chosen. I’ve seen spreadsheets and graphs that technically are factual, but they don’t really tell the truth. So, there’s a big gap there.
How does that play into the way we as a society should think about what AI does? AI systems are going out and training on data points that might be facts, but the way those facts, details, or data points get arranged ends up determining whether they’re telling us something true.
I think that’s right. I think that as a society, we need to use technology to enhance our collective goals. We shouldn’t just let technology run wild. That’s not to say that we should regulate these things because I’m generally quite against that. I think we should let innovation happen to the greatest extent reasonably possible, but as consumers, we have a say in how these systems work, how they’re designed, and how they’re deployed.
As it relates to the search for truth, the people who own and use these systems have grappled with these questions in the past. If you want to Google Search certain questions, like the racial disparity in IQ in the United States, you’re going to get a fairly curated answer. I think that in itself is a very dangerous, polarizing set of topics. We need to ask ourselves the same questions that we asked with the last generation of technologies, because that’s what it is.
AI is just a new way of delivering a lot of that information. It’s a more effective way in some ways. It’s going to do it in a more convincing and powerful way. So, it’s even more important that we ask ourselves, “How do we want information to be presented? How do we want to steer these systems so that they deliver truth and avoid bias?”
It’s a big reason why Elon Musk with Grok has taken such a different approach than Google took with Gemini. If you remember, the Gemini model famously had Black Nazis, and it refused to answer certain questions. It allegedly had some political bias. I think that was because Google was struggling to answer and resolve some of these difficult questions about how you make the models deliver truth, not just facts. It maybe hadn’t spent enough time parsing through how it wanted to do that.
I mean, Grok seems to be having its own issues.
[Laughs] It is.
It’s like people, right? Somebody who swings one way has trouble with certain things, and somebody who swings another way has trouble with other things. There’s the matter of facts, and then there’s what people are inclined to believe.
I’m getting closer to the debate issue here, but sometimes you have facts that you string together in a certain way, and it’s not exactly true but people really want to believe it, right? They embrace it. Then, sometimes you have truths that people completely want to dismiss. The quality of the information, the truth, or the confusion doesn’t necessarily correlate with how likely your audience is to say, “Yeah, Richard’s right.”
How do we deal with that at a time when these models are designed to be convincing regardless of whether they’re stringing together the facts to create truth or whether they’re stringing together the facts to create something else?
I think that you observe confirmation bias throughout society with or without AI. People are searching for facts that confirm their prior beliefs. There’s something comforting to people about being told and validated that they were right. Regardless of the technology you use, the desire to feel like they’re correct is just a baseline for all human beings.
So, if you want to shape how people think or convince them of something that you know to be true, you have to start from the position that they’re not going to want to hear it if it’s incongruent with their prior beliefs. I think AI can make these things better, and it can make these things worse, right? AI is going to make it much easier for people who are looking for facts that back them up and validate what they already believe. It’s going to give you the world’s most efficient mechanism for delivering information of the type that you choose.
I don’t think all is lost because I also think that we have a new tool in our armory for people who are trying to provide truth, help change somebody’s perspective, or show them a new way. We have a new tool in our armory to do that, right? We have this incredible OpenAI research assistant called deep research that we never had before, which means we can start to deliver more compelling facts. We can get a better sense of what types of facts or examples are going to convince people. We can build better ads. We can make more convincing statements. We can road test buzzwords. We can be more creative because we have AI. Fundamentally, we’ve got a sparring partner that helps us to craft our message.
So, AI is basically going to make these things better and worse all at the same time. My hope is that the right side wins, that people in search of truth can be more compelling now that they’ve got a host of new tools available to them, but only if they learn how to use them. It’s not guaranteed that people will learn these new systems, but people like me and you can go out there and proselytize for the benefits and capabilities of these things.
But it feels like we’re at a magic show, right? The reason why many illusions work is because the audience gets primed to think one thing, and then a different thing happens. We’re being conditioned, and AI can be used to convince people of truth by understanding what they already believe and building a pathway. It can also be used to lead people astray by understanding what they already believe and adding breadcrumbs to make them believe whatever conspiracy theory may or may not be true.
How is it swinging right now? How does a product like the one Robin AI is putting out lead all of this in a better direction?
I think a lot of this comes down to validation. [OpenAI CEO] Sam Altman said something that I thought was really insightful. He said that the algorithms that power most of our social media platforms — X, Facebook, Instagram — are the first example of what AI practitioners call “misaligned AI at scale.” These are systems where the AI models are not actually helping achieve goals that are good for humanity.
The algorithms in these systems were there before ChatGPT, but they are using machine learning to work out what kind of content to surface. It turns out people are entertained by really outrageous, really extreme content. It just keeps their attention. I don’t think anybody would say that’s good for people and makes them better. It’s not nourishing. There are no nutrients in a lot of the content that’s being served to us on these social media platforms, whether it’s politics, people squabbling, or culture wars. These systems have been giving us information that’s designed to get our attention, and that’s just not good for us. It’s not nutritious.
On the whole, we’re not doing very well in the battle to search for truth because the models haven’t actually been optimized to do that. They’ve been optimized to get our attention. I think you need platforms that find ways to combat that. So, to the question of how AI applications help combat this, I think it is by creating tools that help people validate the truth of something.
The most interesting example of this, at least in the popular social paradigm, is Community Notes, because they are a way for someone to say, “This isn’t true, this is false, or you’re not getting the whole picture here.” And it’s not edited by a shadowy editorial board. It’s generally crowdsourced. Wikipedia is another good example. These are systems where you’re basically using the wisdom of the crowds to validate or invalidate information.
In our context, we use citations. We’re saying don’t trust the model, test it. It’s going to give you an answer, but it’s also going to give you an easy way to check for yourself if we’re right or wrong. For me, this is the most interesting part of AI applications. It’s all well and good having capabilities, but as long as we know that they can be used for bad ends or can be inaccurate, we’re going to have to build countermeasures that make it easy for society to get what we want from them. I think Community Notes and citations are all children in the same family of trying to understand how these models truly work and are affecting us.
You’re leading me right to where I was hoping to go. Another child in that family is debate. Because to me, debate is gamified truth search, right? When you search for truth, you create these warring tribes and they assemble facts and fight each other. It’s like, “No, here’s my set of facts and here’s my argument that I’m making based on that.” Then it’s, “Okay, well, here’s mine. Here’s why yours are wrong.” “You forgot about this.”
This happens out in the public square, and then people can see and decide who wins, which is fun. But the payoff is that we’re smarter at the end. We should be, right?
We should be.
We get to sift through and pick apart these things, hopefully correctly if the teams have done their work. Do we need a new model of debate in the AI era? Should these models be debating each other? Should there be debates within them? Do they get scored in a way that helps us understand either the quality of the facts, the quality of the logic in which those facts have been strung together to come to a conclusion, or the quality of the analysis that was developed from that conclusion?
Is part of what we are trying to claw toward right now a way to gamify a search for truth and vetted analysis in this sea of data?
I think that’s what we should be doing. I’m not confident we are seeing that yet. Going back to what we said earlier, what we’ve observed over the last five or six years is people becoming … There’s less debate actually. People are in their communities, real or digital, and are getting their own facts. They’re actually not engaging with the other side. They’re not seeing the other side’s point of view. They’re getting the information that’s served to them. So, it’s almost the opposite of debate.
We need these systems to do a really robust job of surfacing all of the information that’s relevant and characterizing both sides, like you said. I think that’s really possible. For instance, I watched some of the presidential debates and the New York mayoral debate recently, which was really interesting. We now have AI systems that could give you a live fact check or a live alternative perspective during the debate. Wouldn’t that be great for society? Wouldn’t it be good if we could use AI to have more robust conversations in, like you say, the gamified search for truth? I think it can be done in a way that’s entertaining, engaging, and that ultimately drives more engagement than what we’ve had.
Let’s talk about how you got into debate. You grew up in an immigrant household where there were arguments all the time, and my sense is that debate paved your way into law. Tell me about the debate environment you grew up in and what that did for you intellectually.
My family was arguing all the time. We would gather round, watch the news together, and argue about every story. It really helped me to develop a level of independent thinking because there was no credit for just agreeing with someone else. You really had to have your own perspective. More than anything else, it encouraged me to think about what I was saying because you could get torn apart if you hadn’t really thought through what you had to say. And it made me value debate as a way to change minds as well, to help you find the right answer, to come to a conversation wanting to know the truth and not just wanting to win the argument.
For me, those are all skills that you observe in the law. Law is ambiguous. I think people think of the legal industry as being black and white, but the truth is almost all of the law is heavily debated. That’s basically what the Supreme Court is for. It’s to resolve ambiguity and debate. If there was no debate, we wouldn’t need all these judges and court systems. For me, it’s really shaped a lot of the way I think in a lot of my life. It’s why I think how AI is being used in social media is such an important issue for society because I can see very easily how it’s going to shape the way people think, the way people argue or don’t argue. And I can see the implications of that.
You coached an England debate team seven or eight years ago. How do you do that? How do you coach a team to debate more effectively, particularly at the individual level when you see the strengths and weaknesses of a person? And are there ways that you translate that into how you direct a team to build software?
I see the similarities between coaching the England team and running my business all the time. It still surprises me, to be honest. I think that when you’re coaching debate, the number one thing you’re trying to do is help people learn how to think because in the end, they’re going to have to be the ones who stand up and give a five or seven-minute speech in front of a room full of people with not a lot of time to prepare. When you do that, you’re going to have to think on your feet. You’re going to have to find a way to come up with arguments that you think are going to convince the people in the room.
For me, it was all about helping teach them that there’s two sides to every story, that beneath all of the information and facts, there’s normally some valuable principle at stake in every clash or issue that’s important. You want to try and tap into that emotion and conflict when you’re debating. You want to find a way to understand both sides because then you’ll be able to position your side best. You’ll know the strengths and weaknesses of what you want to say.
As the final thing, it was all about coaching individuals. Each person had a different challenge or different strengths, different things they needed to work on. Some people would speak too quickly. Some people were not confident speaking in big crowds. Some people were not good when they had too much time to think. You have to find a way to coach each individual to manage their weaknesses. And you have to bring the team together so that they’re more than the sum of their parts.
I see this challenge all the time when we’re building software, right? Number one, we’re dealing with systems that require different expertise. No one is good at everything that we do. We’ve got legal experts, researchers, engineers, and they all need to work together using their strengths and managing their weaknesses so that they’re more than the sum of their parts. So, that’s been a huge lesson that I apply today to help build Robin AI.
I would say as well, if we’re focusing on individuals, that at any given time, you really need to find a way to put people in the position where they can be in their flow state and do their best work, especially in a startup. It’s really hard being in a startup where you don’t have all the resources and you’re going up against people with way more resources than you. You basically need everybody at the top of their game. That means you’re going to have to coach individuals, not just collectively. That was a big lesson I took from working on debate.
Are people the wild card? When I see the procedural dramas or movies with lawyers and their closing arguments, very often understanding your own strengths as a communicator and your own impact in a room — understanding people’s mindsets, their body language — can be very important.
I’m not sure that we’re close to a time when AI is going to help us get that much better at dealing with people, at least at this stage. Maybe at dealing with facts, with huge, unstructured data sets, or with analyzing tons of video or images to identify faces. But I’m not sure we’re anywhere near it knowing how to respond, what to say, how to adjust our tone to reassure or convince someone. Are we?
No, I think you’re right. That in the moment, interpersonal communication is, at least today, something very human. You only get better at these things through practice. And they’re so real-time — knowing how to respond, knowing how to react, knowing how to adjust your tone, knowing how to read the room and to maybe change course. I don’t see how, at least today, AI is helping with that.
I think you can maybe think about that as in-game. Before and after the game, AI can be really powerful. People in my company will often use AI in advance of a one-to-one or in advance of a meeting where they know they want to bring something up, and they want some coaching on how they can land the point as well as possible. Maybe they’re concerned about something, but they feel like they don’t know enough about the point, and they don’t want to come to the meeting ignorant. They’ll do their research in advance.
So, I think AI is helping before the fact. Then after the fact, we’re seeing people basically look at the game tape. All the meetings at Robin are recorded. We use AI systems to record all our meetings. The transcripts are produced, action items are produced, and summaries are produced. People are asking themselves, “How could I have run that meeting better? I feel like the conflict I had with this person didn’t go the way I wanted. What could I have done differently?” So, I think AI is helping there.
I’d say, as a final point, we have seen systems — and not much is written about these systems — that are extremely convincing one-on-one. There was a company called Character.AI, which was acquired by Google. What it did was build AI avatars that people could interact with, and it would sometimes license those avatars to different companies. We saw a huge surge in AI girlfriends. We saw a huge surge in AI for therapy. We’re seeing people have private, intimate conversations with AI. What Character.AI was really good at was learning from those interactions what would convince you. “What is it I need to say to you to make you change your mind or to make you do something I want?” And I think that’s a growing area of AI research that could easily go badly if it’s not managed.
I don’t know if you know the answer to this, but are AI boyfriends a thing?
[Laughs] I don’t know the answer.
I haven’t heard anything about AI boyfriends.
I’ve never heard anybody say, “AI boyfriends.”
I’ve never heard anything, and it makes me wonder why is it always an AI girlfriend?
I don’t know. I’ve never heard that phrase, you’re right.
Right? I’m a little disturbed that I never asked this question before. I was always like, “Oh yeah, there’s people out there getting AI girlfriends and there’s the movie Her.” There’s no movie called Him.
No.
Do they just not want to talk to us? Do they just not need that kind of validation? There’s something there, Richard.
There absolutely is. It’s a reminder that these systems reflect their creators to some extent. Like you said, it’s why there’s a movie Her. It’s why a lot of AI voices are female. It’s partly because they were made by men. I don’t say that to criticize them, but it’s a reflection of some of the bias involved in building these systems, as well as lots of other complex social problems.
They explain why we have prominent AI girlfriends, but I haven’t heard about many AI boyfriends, at least not yet. Although, there was a wife in a New York Times story, I think, who developed a relationship with ChatGPT. So, I think similar things do happen.
Let me try to bring this all together with you. What problems are we creating — that you can see already, perhaps — with the solutions that we’re bringing to bear? We’ve got this capability to analyze unstructured data, to come up with some answers more quickly, to give humans higher order work to do. I think we’ve talked about how there’s this whole human interaction realm that isn’t getting addressed as deeply by AI systems right now.
My observation as the father of a couple… is it Gen Z now if you’re under 20? They’re not getting as much of that high-quality, high-volume human interaction in their formative years as some previous generations did because there are so many different screens that have the opportunity to intercept that interaction. And they’re hungry for it.
But I wonder, if they were models getting trained, whether they’re getting less data in the very area where humans need to be even sharper because the AI systems aren’t going to help us. Are we perhaps creating a new class of problems or overlooking some areas even as these brilliant systems are coming online?
We’re definitely creating new problems. This is true of all technology that’s significant. It’s going to solve a lot of problems, but it’s going to create new ones.
I’d point to three things with AI. Number one, we are creating more text, and a lot of it is not that useful. So, we’re generating a lot more content, for better or for worse. You’re seeing more blogs because it’s easy to write a blog now. You’re seeing more articles, more LinkedIn status updates, and more content online. Whether that’s good or bad, we are generating more things for people to read. What may happen is that people just read less because it’s harder to sift through the noise to find the signal, or they may rely more on the systems of information they’re used to to get that confirmation bias. So, I think that’s one area AI has not solved, at least today. Generating incremental text has gotten dramatically cheaper and easier than it ever was.
The second thing I’ve observed is that people are losing writing skills because you don’t have to write anymore, really. You don’t even need to prompt ChatGPT in proper English. Your prompts can be quite badly constructed and it kind of works out what you’re trying to say. What I observe is that people’s ability to sit down and write something coherent, something that takes you on a journey, is actually getting worse because of their dependence on these external systems. I think that’s very, very bad because to me, writing is deeply linked to thinking. In some ways, if you can’t write a cogent, sequential explanation of your thoughts, that tells me that your thinking might be quite muddled.
Jeff Bezos had a similar principle. He banned slide decks and insisted on a six-page memo because you can hide things in a slide deck, but you have to know what you’re talking about in a six-page memo. I think that’s a gap that’s emerging because you can depend on AI systems to write, and it can excuse people from thinking.
The final thing I would point to is that we are creating this crisis of validation. When you see something extraordinary online, I, by default, don’t necessarily believe it. Whatever it is, I just assume it might be fake. I’m not going to believe it until I’ve seen more corroboration and more validation. By default, I assume things aren’t true, and that’s pretty bad actually. It used to be that if I saw something, I would assume it’s true, and it’s kind of flipped the other way over the last five years.
So, I think AI has definitely created that new problem. But like we talked about earlier, I think there are ways you can use technology to help combat that and to fight back. I’m just not seeing too many of those capabilities at scale in the world yet.
You’re a news podcaster’s dream interview. I want to know if this is conscious or trained. You tend to answer with three points that are highly organized. You’ll give the headline and then you’ll give the facts, and then you’ll analyze the facts with “point one,” “point two,” and “finally.” It’s very well-structured and you’re not too wordy or lengthy in it. Is that the debater in you?
[Laughs] Yes. I can’t take any credit for that one.
Do you have to think about it anymore or do the answers just come through that way for you?
I do have to think about it, but if you do it enough, it does become second nature. I would say that whenever I’m speaking to someone like you in these types of settings, I think a lot more. The pressure’s on and you get very nervous, but it does help you. It goes back to what I was saying about writing: it’s a way of thinking. You’ve got to have structured thoughts, and to take all the ideas in your mind and hopefully communicate them in an organized way so it’s easy for the audience to learn. That’s a big part of what debating teaches.
You’re a master at it. I almost didn’t pick up on it. You don’t want them to feel like you’re writing them a book report in every answer, and you’re very good at answering naturally at the same time. I was like, “Man, this is well organized.” He always knows what his final point is. I love that. I’m kind of like a drunken master in my speech.
Yes. I know exactly what you mean.
There’s not a lot of obvious form there, so I appreciate it when I see it. Richard Robinson, founder and CEO of Robin AI, using AI to really ramp up productivity in the legal industry and hopefully get us to more facts and fairness. We’ll see if we reach a new era of gamified debate, which you know well. I appreciate you joining me for this episode of Decoder.
Thank you very, very much for having me.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
Hello, and welcome to Decoder! This is Jon Fortt, CNBC journalist, cohost of Closing Bell: Overtime, and creator of the Fortt Knox streaming series on LinkedIn. I’m guest-hosting for a couple more episodes of Decoder this summer while Nilay is out on parental leave.
Today, I’m talking with a very special guest: Gil Duran, an old friend, journalist, and author of The Nerd Reich, a newsletter and forthcoming book about the shifting politics of Silicon Valley and the rise of tech authoritarianism.
I’ve known Gil for a long time. We met at the end of high school and went to college together, and we were also colleagues at the San Jose Mercury News. Gil has had a fascinating career that spans both media and politics: he’s worked as press secretary and comms director for high-profile California politicians like Gov. Jerry Brown and Sen. Dianne Feinstein, and he also advised Kamala Harris when she served as California’s attorney general.
Listen to Decoder, a show hosted by The Verge’s Nilay Patel about big ideas — and other problems. Subscribe here!
Now, writing The Nerd Reich, Gil is focused on a new type of story, one he says has gone woefully under-covered by mainstream media. That story is the influence of tech money on politics and society at large, and the disturbing philosophical undercurrents that are driving it.
The “Nerd Reich,” as Gil sees it, is a web of powerful, ultrawealthy tech billionaires. People like Peter Thiel, Elon Musk, Marc Andreessen, and others, whose politics and influence now see them pushing the country further and further away from democracy and toward something resembling a kind of cross between unrestrained capitalism and monarchy.
This idea has been kicking around for quite a while now. You’ll hear Gil refer to it as the Dark Enlightenment, or as some refer to it, the neo-reactionary movement. Some central characters here include Curtis Yarvin — an influential, anti-democracy blogger whose ideas once stood far outside mainstream acceptability, but who recently has captured the attention of politicians like Vice President JD Vance.
And that’s Gil’s central thesis: while these ideas are not new, their embrace by some of the wealthiest and most powerful people on the planet is a relatively recent phenomenon — one that’s been supercharged by President Donald Trump’s reelection.
Now that these ideas have entered the White House by way of the MAGA movement, Gil argues that it has created a dangerous coalition between the far right and the stewards of the biggest, most popular tech platforms and products. After all, as we’ve seen with Elon Musk and DOGE, these tech billionaires aren’t just sitting in the shadows; they want to tear down and rebuild the government from the ground up.
Gil is one of the sharpest thinkers on this subject, and he never shies away from saying what he really thinks. So I think you’ll find this conversation very illuminating; I know I did.
Okay: The Nerd Reich author Gil Duran. Here we go.
This interview has been lightly edited for length and clarity.
Gil Duran, great to have you here on Decoder.
Thanks for having me.
Well, we’re going to talk about The Nerd Reich, of course, because that’s what you write, what you podcast, and what you do. But first, a disclosure: You and I first met 31 years ago as high school seniors. We received the same journalism scholarship, went to the same college, and we’re friends.
We started our journalism careers. You focused more on culture, government, and politics. I focused more on business and tech. You’ve gone on to a brilliant and wide-ranging career. You’ve run communications for a who’s who of California politics: Jerry Brown, Kamala Harris, Antonio Villaraigosa, Dianne Feinstein.
But that’s a bit of a past life for you. At this moment, our worlds collide. The government and culture stuff, and the business and tech stuff. So, what is the Nerd Reich?
The Nerd Reich is a term that some people use to describe a cultish group of tech billionaires who basically seek to replace democracy with something resembling corporate dictatorship. Some people call this movement the Dark Enlightenment, the neo-reactionary movement, or the network state.
It’s backed by a handful of CEOs and billionaires: people like venture capitalist Marc Andreessen and Coinbase CEO Brian Armstrong, with involvement from people like OpenAI CEO Sam Altman, and the granddaddy of them all, Peter Thiel, who’s been promoting some of these ideas for decades.
What’s wrong with it?
Well, I say it’s inherently anti-American. It sees a post-United States world where, instead of democracy, we will have basically tech feudalism — fiefdoms run by tech corporations. They’re pretty explicit about this point. You and I did Poli Sci 101 together at DePauw University. I would say that there is a conversation in academia about the long-term health of the nation state in the 21st century, and these guys are tapping into that and proposing a product or a model that will put them at the supreme head of world government in the future.
But I think before we go about trying to change the nation or change the nation state, we should probably discuss that idea with the American people. So, in a nutshell, you have a group of super-rich elites with a very apocalyptic vision of where society and the world are headed, and they are rushing ahead with what they think is the solution, a solution that, by the way, will also put a crown on their heads.
This reminds me of some themes from a book I read in high school, by author Ayn Rand, called The Fountainhead, and I think you and I might’ve talked about this during freshman year in college as well. It’s also in line with another Rand book, Atlas Shrugged.
It’s very convincing when you’re reading it as a teenager and maybe also as a billionaire, this idea that, “Hey, there are some people out there who are just more productive as capitalists, and capitalism is good. It’s the lifeblood of society, and these are the people we need to be running things, not these altruistic, mealy-mouthed, progressive people who are just watering everything down and making everything mediocre.”
It’s definitely an ideology of tech supremacy. This idea that if you have billions of dollars and you’ve created some tech product that’s valuable, that makes you good at everything. I think there’s an old Yiddish saying, “If you’re rich, you’re also handsome and you can sing.” So, it’s this idea that because you’re rich, you can now do everything. We see this Dunning-Kruger approach with everyone from Elon Musk to Jeff Bezos going into businesses where they have no experience and making a mess of things. So, they’re trying to do that basic idea with governance.
You’re right, a lot of people trace this back to ideas like Galt’s Gulch [in Atlas Shrugged]. It’s an idea we find throughout science fiction, with enclaves of tech elites controlling everything. Usually, they’re the bad guys. For the life of me, I can’t understand why these guys have decided to overtly be the bad guys in science fiction.
But these are ideas that really collapse under the weight of reality because governing and getting the consent of the governed is a long-standing historical problem. The best we’ve gotten in our thousands of years is figuring out something like the democracy we have right now. As for the idea that we’re just going to replace it with these corporate fiefdoms… there’s a lot they haven’t thought through, and it becomes very obvious the moment you start probing beneath the surface.
Now, one of the leading thinkers on the tech billionaire side of this is a guy named Curtis Yarvin. He recently had a debate at Harvard, and he seems to have done pretty well. How would you frame how Yarvin approaches these things?
Yarvin is a computer programmer and a pseudo-intellectual who, in the early 2000s, started inventing his own theory of politics, largely catering to the idea that instead of a democracy, we need a dictatorship, that we’d be better off with a monarchy. He went into great detail about how to create this new system, which involved breaking up the nation state into smaller territories he called patchworks, which would then be governed by totalitarian corporations. For example, he envisioned a future San Francisco run by a corporation called Frisk Corp, where everybody would be under constant and total surveillance, even in the privacy of their own homes.
This is what would ensure your security, and you’d need to swipe in and swipe out to get in or out of the city. The government of the city would have total power over you. They could kill you if they wanted. You’d have no rights. The only thing you’d be able to do is to leave, to vote with your feet, which is underestimating how authoritarian governments work. Because if everybody could just leave, people would just leave North Korea, China, all these countries. They don’t do that for a reason — because they’re not allowed to.
Or maybe it’s Singapore.
Well, that’s one idea it’s become. So, a few years later, Balaji Srinivasan, who’s the former CTO of Coinbase and a former Andreessen Horowitz partner and a friend of both Yarvin and Peter Thiel, and that’s an important part… Peter Thiel has a longstanding association with Yarvin, funded his company for years, and has named him in conversations as an inspiration. He’s considered Peter Thiel’s house philosopher.
Well, Srinivasan saw that the association with Yarvin made it creepy, because Yarvin has some weird ideas about what to do with poor people, and a lot of genocidal language runs through his writings. So, Balaji Srinivasan tried to update it to something called The Network State and put out a whole book where he basically rebrands it as a corporate-safe idea about how we have to start thinking about sovereignty. But he also has some pretty nutty ideas about how that would look.
So I guess if I were to take the other side, which I have to do to keep it interesting, one could say that Yarvin’s ideas aren’t so far from, say, Alexander Hamilton’s? And the dynamic between Hamilton and Thomas Jefferson was a healthy dynamic in the formation of American ideas.
Hamilton was accused of being a monarchist. He was pretty clearly in favor of smart and rich people having a lot more power and say over how things went than everybody else. Thomas Jefferson, despite his many contradictions and hypocrisies, was the man of the people, the pro-democracy guy.
Well, I think Yarvin would like that comparison. I don’t think he deserves it. You’ve basically got a failed startup founder — his product has never really done what it was supposed to do. If you can’t create that world, how can you deal with the rest of it? I’ve actually been in government at City Hall, in the state capital in Sacramento. There are some hard decisions, some complex issues, that don’t lend themselves to this simplistic thinking of a bunch of guys who spend all their lives with their necks craned over their computers in the code. That’s what they just don’t get. That’s the missing element of their ideas.
Yarvin doesn’t have a PhD in anything, certainly not in history. If you talk to political scientists and historians as I have been doing, because I’m writing a book on this subject, it’s almost an insult to bring up his ideas to them because they make no sense, and they immediately go deep into all the philosophers, thinkers, and historians who have debunked the basic ideas here.
So, they’re operating at a basic level, almost like high schoolers arguing, or us maybe freshman year arguing over Galt’s Gulch. Or remember we had some weird debates over stuff that neither of us really knew about, but we were really going to be right about it. That’s what they’re doing, except they’re grown men. The problem is when you get these billionaires who maybe haven’t matured as fast as their finances have, they think these are great ideas, and they push them, and then unfortunately, the rest of us have to deal with them.
At the root of it, we’re at a time where the broader public, at least big segments of the broader public, seem to have lost faith in the ideas of expertise and of institutions. So, you say he doesn’t have a PhD. There are a lot of people who are like, “Yay, no PhD. He didn’t get a PhD in this from Harvard. Boo, Harvard.”
What’s going on with perhaps the framing of the American promise, where we’re now in a time when people are lionizing those who have made a lot of money over those who have gained knowledge or experience in a given area? It seems like the result of some failed promise.
Well, definitely. We have a media that largely glamorizes the wealthy and makes them seem like they’re better than everybody else. I think we’ve got a longstanding culture in which wealth is seen as proof that you’re better, that you’re more hardworking, that you’re morally superior. So, there’s a lot flowing toward that.
I mean, our president of the United States is a guy who played the stereotype of a rich guy on TV in the ’80s and the ’90s, right? Donald Trump has always been there with this gold-plated, kitschy image that he projects. But I think it’s starting to fall short in some ways. I think when the effects of these tariffs hit, Trump’s poll numbers started sinking, and people are learning in their own ways that great wealth doesn’t equal wisdom, and it doesn’t equal leadership.
Trump has been able to get pretty far on an illusion, but there’s only one Donald Trump. Whether you love him or hate him, there’s no denying that he’s had a longstanding charismatic relationship with the American people. You don’t get that with Elon Musk. You don’t get that with Peter Thiel, who can barely choke out a sentence that’s comprehensible. You certainly don’t get that with Curtis Yarvin. If I had a big budget, I would definitely put ads targeting MAGA, showing them the stuff these guys are saying, the stuff these guys are talking about, because these guys have nothing in common with your red hat-wearing Republican. They look down on those people, too.
Unfortunately, the only person making this point is someone I also despise politically, Steve Bannon. He has been telling people about this stuff — about transhumanism and the network state and all these weird ideas that they plan to impose on people. So, I think the bigger problem is that we haven’t really had a discussion about any of this stuff because, largely, there’s been a media blackout on it. I think editors think it’s too weird or esoteric.
Now, it’s being shoved in our face more and more with every passing month, and we’re way past the point of being able to ignore it. I mean, I’m surprised The New Yorker covered this before The New York Times or The Washington Post. I was writing stuff last year that they’re just getting around to now, and believe me, as happy as I am to be a freelance writer who found an important story, I shouldn’t have been the guy talking about this last year. Last year was when the American people deserved to know about this stuff.
I think we might be in a place where certain publications are trying not to seem weird and, in the process, perhaps not covering certain things. But Yarvin would claim — as a matter of fact, not would, he does — that the FDR presidency is the model for what America needs now, right? He talks about the amount of power that FDR concentrated. FDR, coming out of the Gilded Age, had a lot of power and made a lot of decisions that one would easily argue were important. Is that wrong?
Well, he mischaracterizes FDR as a monarch or a dictator, which is not true because FDR was elected. FDR was beloved, and after FDR left, someone else took his place who was not a member of his family. After that, a few years later, someone from the opposite party was in charge. So, that’s how the American government works.
But here’s the clue for why Yarvin is obsessed with FDR. FDR was an emergency wartime president. He was president during the Great Depression and WWII, a time of great crisis. Because of that crisis, he was given more power to make things happen quickly in defense of the nation. And now we see what? Donald Trump trying to declare emergencies everywhere in order to get more power, without having to go through Congress, and without having to go through the usual checks and balances. So, I really think the key there is to understand the degree to which they see the use of emergency powers as the easiest way to expand the power of the executive.
I wrote a piece at The Nerd Reich about a startup here in California that is proposing that Donald Trump declare a national security emergency to allow them to build a little network state tech hub on the island of Alameda, on a former naval base where there happen to be endangered migratory bird habitats. Everywhere we look, even this ridiculous example today, we’re seeing this desire for an emergency, which is a way to grab power. So, I think that’s the key to understanding Yarvin’s slanderous obsession with FDR: the emergency powers he was granted.
That idea of an emergency is, I think, a thread throughout law, right? But emergencies are hard to define. The one recent example is the situation in LA around these protests. Were they mostly peaceful protests? Or were they riots and insurrections? It just depends on whose social media feed you’re looking at, or what news channel you happen to watch the most, and how that gets framed. So, an emergency, in a way, is in the eye of the beholder and in the context of where the beholder thinks the country’s heading.
Well, I guess to some degree, when you look at different feeds, you’re seeing different things. But in California, when there’s an emergency, the governor has the power to act. The big thing with LA was that the mayor and the governor both made it clear that they had the situation under control. The LAPD and the LA Sheriff’s Department were handling the situation. If someone needs to call on the National Guard, that’s the governor’s job. The governor called in the National Guard back in 2020 when there were several days of destruction and violence in Sacramento. So, this was Trump seizing on the perception created by Fox News and Elon Musk’s X to insert himself as the hero of an emergency that just didn’t exist.
Yes, there were protests. Yes, a small portion of them had some property destruction and vandalism, but no one involved in the situation needed Donald Trump to interfere. He politicized the situation and inserted himself, and what he did actually was try to create a crisis because he knew that would create more of a backlash from people in California. More than anything right now, Donald Trump wants confrontation. So, it’s like with Silicon Valley: they’re always crying about a crisis that they’re actually rushing to create at the same time. “AGI will kill us all, so let’s rush to create it.” It’s a strange mentality, but “only we can solve the emergency that we create” seems to be the logic they apply there.
Another thing I find interesting about the FDR argument that Yarvin makes is that FDR started social safety nets, big government expansion, massive economic stimulus, and labor rights support. He arguably laid the groundwork for the Great Society. That doesn’t sound like what this crowd wants to repeat.
No, they want to use the powers of a dictatorship to go in the opposite direction. A lot of this is about pulling out of the social contract, and a lot of these ideas can be traced back to a 1997 book called The Sovereign Individual, which predicted that in the 21st century, the coming information age would eliminate most jobs and that this would lead to violence and chaos and societal degradation as people no longer had money and could no longer afford anything. But a new technology called cybercurrency would allow a certain cognitive elite, people who can become wealthy off of this new information age, to rise in power and create their own little fortress societies where they would be safe as everything fell apart outside of the walls.
We have a bunch of CEOs telling us that AI is going to get rid of millions and millions of jobs. Well, what’s going to happen to those people who can no longer work? What is their future? What is the future of their children? They say, “Well, AI will create other jobs.” Well, what I’m hearing is that AI will just continuously improve and take away the jobs it creates. There’s this thing that doesn’t quite add up, and some of them talk about universal basic income and everyone getting a share of the profits. Well, that sounds a lot like Silicon Valley socialism.
What does freedom look like in Sam Altman’s universal basic income universe? What does democracy look like when you don’t get to eat unless someone like Elon Musk is approving of your existence? There’s a question there that these guys hint at, but never answer. I think that’s a place where you have to go in politics. We have to talk about what the future looks like if they’re going to kill all jobs.
So what’s the solution then? Because short of regulating AI out of existence… If indeed we are heading toward [artificial general intelligence], and this super smart software that eliminates a bunch of jobs — and very often that isn’t what ends up happening — it seems like we were having this same conversation 20 or 25 years ago about the internet.
In short, the internet did eliminate encyclopedia salesman jobs, but people are still necessary in the loop. I mean, even as the US somehow tries to stop the development of this technology elsewhere, there are technology companies and smart people at those companies who are going to continue working on it.
Well, I think you need a smart approach to regulation. Unfortunately, what we have now is an effort to ban all regulation because these guys have captured the White House with Trump. I think that it’s hard to distinguish between the hype of [artificial general intelligence] and the very real harms of AI, which will come in a simpler form: the bias we already see coming out of these companies in their algorithms and the way it’ll be used to exacerbate existing deficiencies in our society. I think the bigger problem is that we’re learning that if you give people too much money, they go crazy.
Some of them go crazy and decide they want to end democracy and overthrow the United States of America and live in some dystopian science fiction fantasy world. When you have that much money, you don’t just talk about it, you take steps to make it happen. How we deal with that problem is a very serious question that I don’t think anybody in politics, Democrat or Republican, wants to answer because they’re so dependent on these people for campaign contributions. We need someone like FDR to be a traitor to his class and rise up and find a way to put these billionaires back in their place.
Well, to go back to the FDR idea here, what if this is part of the natural rhythm of the American climate? That when things go too extreme in one direction, as with the robber barons and all of that, the roaring ’20s and their extremes, there needs to be, or ends up being, some consolidation of power on the other side that swings the pendulum back. Not that those ideas are right, not that everything FDR did was completely sustainable, or the way that he framed it.
But what if it was just necessary because of the excesses of the rich that built up on that side, and now what these guys and their crazy ideas represent is merely a pendulum swing in the other direction?
I think that’s plausible. I think studying these guys, one thing that I find frightening is that they have studied those periods of history. They understand what happened in the Gilded Age. Balaji Srinivasan name-checks Ida Tarbell as a major enemy. He’s still mad about what she did to Standard Oil, and she’s been dead since 1944. So, I think they’re looking for a way to end the game. They have enough wealth accruing now where we have robber barons who are richer than maybe at any other point in history, all these guys with infinite money, and now they’re creating their own forms of money.
So, I think they’re looking for a way to end the game, and that’s why they’re teamed up with MAGA, because here’s a president who’s willing to sell to the highest bidder and who’s completely testing the law, the constitution, and the limitations on executive power. I have no doubt that if Donald Trump tomorrow declared himself the permanent leader of the United States and said democracy is over, that we would have applause and silence from almost all of Silicon Valley.
That’s a different look than Silicon Valley elite executives displayed 25 years ago when you and I were pretty fresh out of school and heading over there. I mean, is this what we’re seeing, the expansion of Atherton?
For people who aren’t familiar with the Bay Area, Atherton is outside of Palo Alto. A lot of CEOs and venture capitalists live there, and it’s the weirdest place in the Bay Area I’ve ever been because you go there and there are these streets and hedges that are 15 feet tall. You can barely see any houses. In some areas, you can’t see any houses at all because that’s the idea. You’re not supposed to see anybody’s house, but super-rich people live there. If they invite you and they open the gate behind the hedges, you can go in. They’re sending their kids to private schools. They’re in these houses that you can’t see. They’re living very wealthy lives, and you see mostly pickup trucks from people who are coming to service the properties on the streets.
That idea seems to have expanded in recent years, even beyond Atherton, where 25 years ago — and not to lionize this guy — but Steve Jobs was living in Palo Alto, sending his kids to public school, and local families trick-or-treated at his house. It’s very different from the Atherton mindset that seems to have expanded lately.
Definitely. I think we’ve had a series of crises in our society that have radicalized some of these guys, as has the tremendous expansion of wealth through things like crypto in recent years. Plus, we’ve had a social media hit. So, you had the financial crisis in 2008, which showed people that everything’s much shakier than we thought. We had the rise of social media and a social media president, Donald Trump, who completely disrupted politics. We had the MeToo Movement, which led to a lot more public awareness and sensitivity around certain structures of oppression. That pissed off a lot of people who felt pretty powerful and rich and without problems before that.
Then we had the pandemic, which completely made us all work together for a while and created some divisions over things like vaccines and public health safety measures. So, I think we’ve lived through a period where our tech CEOs and billionaires no longer feel comfortable just being a part of the system and trying to find a way to work within it. They’ve decided that everything is ripe for disruption, including the United States of America, including liberal democracy, and that it is their destiny to overthrow it or change it and create a system that answers to them. I think that’s what the problem is.
I do remember that 20 years ago, living in San Francisco, some people who had been there a while were really disdainful of the techies and saw them as a threat, as this thing that was going to bring bad ideas and change to the town. I thought they were exaggerating. I didn’t think it was that bad. It turns out, actually, it is that bad.
Well, interesting. Let’s visit that for a moment, because the national popular image of San Francisco is of a place where needles are being passed out and homeless encampments are spreading throughout the city. There’s little attention to the actual mechanics of day-to-day life.
At the same time, the way I’ve experienced San Francisco over the past 20-plus years is that there’s been this enormous investment in office space and in the downtown, to the exclusion of livability for actual individuals and families. It got to the point where, when the pandemic hit, companies shut down, and we went to work from home, San Francisco became a ghost town because there had been so much emphasis placed on office space versus people actually living there. What do you think is the truth of what San Francisco is now, what it has become, and who it’s serving?
Well, San Francisco is the best and most beautiful city in the United States. I mean, it’s an amazing place. There are all kinds of things happening, and there are areas of town that are bad. If you didn’t grow up in the United States, you may not know that there’s always a bad part of town in this country, because there are poor people and we tend to put them all in one neighborhood. I grew up in a poor neighborhood, so I know. If you go to my hometown, Tulare, there’s a nice part where people have swimming pools and big lawns. And there’s the part I’m from, which is not a place you want to go hanging around at night. So, that’s normal.
There are a few blocks in San Francisco where the homelessness and open drug addiction have gotten out of hand, and that’s a problem that needs to be dealt with, hopefully in a rational, evidence-based way. But what’s happened is that there’s been this Fox News-driven slander campaign to make it seem like that’s all of San Francisco, when you can walk three blocks from the epicenter of this zone and get like a $20 Japanese-style cocktail, right? It’s really the haves and the have-nots. It’s a tale of two cities, and most of the city is expensive, beautiful, wonderful, glorious, full of culture and food, et cetera. So, that part of the story is very fictional, but people have been priced out. That’s a story all across California and all across many parts of the nation.
Wages aren’t keeping up in a way that allows people to continue to live in this country. A recent study showed that 60 percent of Americans have trouble making ends meet and affording the basics. That’s the problem we’ve got to solve. So, these tech billionaires always want to scream about a few blocks in San Francisco where there’s fentanyl addiction. Well, guess what’s fueling the fentanyl trade more than anything? Crypto. So, let’s ban crypto as a first step toward solving the drug crisis, but they don’t want to do that.
So, they really are good at directing attention at a scapegoat and away from the core problem, which is that we have an increasing number of billionaires, an increasing number of people living in tents, and a society that is so vastly unequal that it is headed toward a collapse one way or another, but that’ll just mean a reckoning. I think it’s going to be a reckoning with the role that these billionaires play in our society and with the role that tech and Silicon Valley play in our society. Downtown San Francisco is empty because of Zoom, because you can work from home now. I worked in the financial district for 10 years, and that’s not where you see homelessness and drug addiction. There are a few homeless people, but that’s about a mile away.
The problem with downtown is that people can work from home now. So, all the restaurants that were there for years serving lunch are gone, and the offices are empty, and it’s dead. But if you go to the neighborhoods where people are now working, where they live, everything’s booming because that’s where things have moved to. So, we’ve seen this migration out of office space. It turns out we didn’t need it because technology disrupted it, and people would rather not go to the office every day. The stories make it seem like the reason downtown is empty is because of the homeless people, and that’s a completely fabricated disinformation propaganda narrative.
Those are two separate problems within a few miles of each other, but the way they get conflated by many in Silicon Valley and by Fox, Newsmax, and by the president of the United States gives people a false impression of what’s happening.
So we might just be in this transition period, going back to this idea of climate versus weather, where the cities were designed for a 1980s or 1990s reality in how people go throughout their day, and also work, play, and live. But some of the fundamental mechanics of that have changed because of technology, and people are going to work, play, and live differently. That means we’ve got a bunch of infrastructure sitting in various places that just isn’t as necessary as it used to be, and then you end up with problems when you’ve got setups like that.
Definitely. For thousands and thousands of years, human beings have figured out a way to shelter themselves. It’s amazing that we can’t figure out a way to provide shelter for the people we have here. Part of the problem in downtown San Francisco is that it’s not so easy to convert all of these office buildings into housing. So, it would be expensive to have to tear them down or convert them, or completely reconfigure them. There are some office buildings that are being reconfigured into housing, but of course, that’s probably going to be housing not for poor people, but for wealthier people who can afford to live in a redone old office building.
I do think we’re at a point where we have to figure out, with all the technologies we have and all of the future that we see right in our face, why can’t we solve the basic old problems? Unfortunately, there’s a political disconnect there where the people who say they want to solve these problems only want to solve them in a certain way. Then there’s the political disagreement. For instance, there’s this book, Abundance, that came out and that everybody’s fighting about all the time. Oh, sure, Abundance sounds great. It’s a good word. We should have abundance. When you get down into details, though, there’s no rational policy you can propose where everybody’s just going to agree because it’s rational. Politics is not based on rationality.
It’s based on all of these other factors: emotion, identity, and morality. So, even if we had housing for everybody, someone would object because you have to work to get into the housing. Actually, here in San Francisco, there’s been a movement to keep people from getting housing if they don’t stop doing drugs first. Well, the evidence and the data show that you want to get people into housing and not create more barriers, and then try to get them off of the drugs. But some people have a moral block against allowing people to get something while they’re doing drugs. It’s completely unscientific. The data proves conclusively that it’s wrong, but all these tech CEOs here are pro “you have to get the treatment or you don’t get the housing.”
Well, if you couldn’t get government contracts if you were on drugs, that would be interesting.
[Laughs] Yeah, well, the rules only apply to some people. Drug tests are for the little people.
Are we in a situation where technology and democracy are fundamentally at odds? Are we in a cotton gin moment in a way? Because with every new technology, somebody tries to frame it like this is the solution to inequality — the digital divide, we’re going to solve that. We didn’t solve the digital divide. As we saw during COVID, kids who had means and were home from school actually tended to do better in isolation. Kids who were in even really high-powered charter school-type programs like KIPP did far worse.
It debunked the argument that these charter school programs, like KIPP, are just cherry-picking the most promising kids out of the inner city, and that the reason they do well is that those kids would’ve done well anyway, because a lot of those kids really had a lot of trouble when they were disconnected from that structured environment and from the intense attention from teachers, the academic preparedness, et cetera. But it’s happening again now with AI.
So, I really wonder if we are in this march of technological progress, but if technology and democracy, fairness, and economic opportunity are fundamentally at odds. Is there any way to shift that equation that you see?
Well, social justice and economic justice would be considered woke now by this new generation of CEOs, and they think they just defeated all of those ideas. Now, we can go into a meritocracy future where you only get things if you can compete at a very high level, according to rules created by these Silicon Valley guys. I think that a big part of the problem is who is creating this technology and who are they creating it for? I don’t necessarily buy the hype on AGI and AI, but the CEOs are pushing this idea that they’re about to massively transform and disrupt the world in a way that sounds like it’s going to maybe harm the majority of people to benefit a small number of people.
Why is that the case? Why are they designing it in that way? Why aren’t they trying to find ways to create technologies that can solve the problems we have, rather than create new, worse problems? I think a fundamental problem is that these technologies are now being designed by people in the private sector with nothing more than a profit motive. Whereas in the past, some of the biggest, most transformative technologies we have came out of government, for the public good, for national security, or some other incentive, to try to solve the problem in a different way. So, I don’t know how you can fix that problem as long as we’re going to let a handful of extremely wealthy megalomaniacs guide the progress.
But you don’t think that the mindset of this handful of people who you’ve highlighted as being part of the Nerd Reich is the dominant mindset in Silicon Valley? There seems to be a range. I talked to a lot of people in Silicon Valley, a lot of CEOs and founders, and very few of them seem to be strongly aligned with this libertarian objectivist group. Some are like, “Oh, this’ll pass. I didn’t like the Biden policies. So, maybe this is a bit better.”
Some are actively against the direction that this administration, this government, is going, but hey, they have a business to run. So, they’re trying to keep their heads down. Isn’t there the idea that some of these people actually could do or try, or support something different?
Well, it depends on who wins. Silence is complicity, and people sometimes get mad. They say, “Don’t say all tech, don’t say all Silicon Valley.” Well, where are all the techies in Silicon Valley, the CEOs, speaking out against this stuff? Where are the people standing up and saying, “You know what? I’m going to put my money against this. This is morally wrong. This is repugnant. We should do it another way”? Silicon Valley, per capita, I think, has the highest collection of cowardly CEOs in the world. They all want to hide under a rock and be on whatever side wins. That’s what I see. I don’t see one really speaking out.
That’s been a frequent criticism during this administration of the likes of Jeff Bezos, who owns The Washington Post, with its slogan, “Democracy dies in darkness.” Now, I wonder, is that a prophecy, or was it a warning? Was it a prediction? He seems to have taken a different tack, trying not to be in direct opposition to the Trump administration, no?
Well, definitely, he wants to keep all of his government contracts, and he’s gutted an editorial board that would’ve been a big voice in a time like this for speaking as a conscience of the nation. He has some childish idea that he can just be for freedom and economic markets without those being political. I think Bezos is totally going for self-interest. Like I said, I think if Trump declared that democracy was over, you wouldn’t hear a peep from The Washington Post or from Jeff Bezos. They just want to be on the side that’s winning, to steal a phrase from Bob Dylan. That to me is the scariest part about all of it.
Growing up in this country, I grew up in a very conservative place. I think when I was really young, I had some Republican leanings just because of what I was surrounded by. It was a very patriotic town, and a lot of people in my family enlisted in the military for no particular reason. It was considered just a thing to do. You serve your country.
It’s amazing to grow up and to realize, as I head toward 50, that so many people just don’t believe in anything, don’t believe in this country, and don’t really care what happens to it as long as they get theirs. That is just not the way I ever thought it would be. That has been a bit of a surprise to me. If you can’t have a little bit of courage when you have billions and billions of dollars, I think the value of that money is suspect. I think it’s going to take Americans who are not billionaires to save this country, and then we’re going to have a lot of questions for all these rich people who sat around just watching it burn.
We’ve talked about the collapse or weakening of many institutions. You’ve worked for big media companies, for California politicians, and now you’re independent. So, in a way, perhaps you’re living out that disillusionment with institutions and their ability to execute on truth or move the needle. No?
Well, I’d say it’s more a situation where billionaires in technology have killed journalism jobs. I would’ve definitely been at a publication if there had been one. It was a hard struggle last year to find a way to make ends meet and still write the stories I wrote for The New Republic. I don’t know if people know what freelance pays these days, but I wrote five major stories, and it got me about a month’s worth of rent in San Francisco. So, you don’t make your money off of writing. You write on the side, and you find other ways to survive. I think that we were talking earlier about technology taking jobs. Do you remember the San Jose Mercury News in 1998, compared to now?
I do.
So, there was a bit of a decimation. But I think the bigger problem is the economics of it, where any place you go — and I was at the San Francisco Examiner — it’s just layoffs all the time. It gets a bit depressing, and then the rules get tighter and tighter. There’s just constant panic and trying to remake whatever the publication is to meet some target that some new person has because the last person failed. I was actually going to leave journalism, but I found this story, and I felt like I had to tell this story. Maybe this is the last story I’ll tell as a journalist. So, I did it.
But fortunately, there are positive parts of technology. It doesn’t all have to be controlled by a handful of fascists. We deserve technology as well. We deserve technology that doesn’t come with the threat of losing our freedom, of losing our identity, of losing our way of life. Ironically, technology has created this system now where I can have direct subscribers and do my work and reach people on podcasts and on YouTube.
So, we’re all in this soup. My argument is not that technology is bad. I am an early adopter of everything. I just got a new e-bike. It’s that we don’t have to live under the boot of these people because they’re at some company that creates something. There should be more public ownership of some of these things as well.
Unpack some of that because, as you said, you are not a Luddite. In fact, you are embracing lots of technology and how you’re distributing this Nerd Reich message, this newsletter and podcast, and you’re using some AI in how the message is formulated and distributed, right? Like the imagery. Tell me about that.
Oh, well, I forbade that on the podcast. We’re using Adobe now; I use Shutterstock. I prefer real human images. There was a moment early on when I used a bit of OpenAI’s image generation. It was like a new toy, and I used it actually to depict the future that some of these guys want. Actually, Balaji Srinivasan had talked about this Gray Pride Parade, where all the techies would wear gray uniforms and march down Market Street in San Francisco with police and drones flying overhead. So, I literally plugged his phrase into OpenAI, into DALL-E, and it created this terrifying, horrific image that actually looked like Trump’s parade over the weekend.
But the more I have had conversations with people and talked about this stuff, the more uncomfortable I’ve become with using those tools. I think right now we’re seeing this phase of an AI art aesthetic that’s going to look really bad in a few years, especially when you think about the amount of theft and robbery taking place to create these little toys and the idea that it boils a swimming pool or something in order to generate it.
So, I have made it clear that I don’t want AI in my images. So I use Shutterstock. I think most people are looking for the words, anyway. Actually, I’ve got a Nerd Reich art project underway, and I’m going to work with a real artist on it, even though people are like, “Just use AI.” I don’t think we can get to a point where we just use AI as long as it’s going to enrich these guys.
That’s the same reason I’m not on Substack. I started on Substack, but a lot of my readers were like, “How can you be on Substack? Andreessen Horowitz is one of the main investors in this.” At first, I was hesitant, and Substack made it really easy to start a newsletter and monetize it. But there are alternatives. There’s a nonprofit I use now, Ghost, and in fact, it takes a smaller percentage of your pay. I’ve had bigger growth on it. It really does depend on your content. You’re not going to become a big Substack writer with a crappy blog. If you’re a good writer, you can take it elsewhere.
So, I think we all have to navigate these relationships, and there’s no way to be perfect, just like we use fossil fuels. We eat products that might be harvested under unethical conditions, but we have to strive to be more conscious of our relationships with these technologies and to do better if we can.
Sounds like farm-to-table technology. So, okay, there are boundaries around what you’re willing to do. You axed out the AI stuff. That’s like saying, “Okay, I’m not going to import that meat from that far away and burn all those fossil fuels.” You’re still podcasting. You’re still using the internet, right? It’s technology.
What are you finding about the feedback that you’re getting? What’s driving the growth and the engagement on the platforms where people are hearing your thesis and the guests with whom you’re speaking?
I think people have been looking for a way to make sense of what’s happening. This has been a missing piece of the puzzle: “What’s happening with Silicon Valley?” It’s a rightward turn. It’s mating with MAGA and these weird ideas that we see coming out, like the Freedom City, taking Greenland and giving it to Praxis, which is a company founded by or funded by Sam Altman, Peter Thiel, and Marc Andreessen. People wonder why all this is happening. No one’s been telling a cohesive narrative in the mainstream press about why it’s happening. That’s very much been my focus.
Look, last year, I think to a lot of people this seemed like conspiratorial stuff, or like you said earlier, it seemed weird. Well, we live in weird times, and we’ve got some conspiratorial people on the scene. I think that the press is finally starting to catch up to what I was trying to say last year. If Kamala Harris had won, it would look like I had gone down a very strange rabbit hole, and it wouldn’t have been clear why, because this stuff would’ve been pushed back a few years. I think it still would’ve been relevant, but it would’ve been pushed back.
But even I didn’t expect it to accelerate this quickly. I didn’t think it’d be Elon Musk in the White House doing Curtis Yarvin’s “retire all government employees” plan. I didn’t think that Trump would already be pushing to build these freedom cities and take Greenland for the purpose of doing that. This has gone much faster and much further than I expected so far.
Now we’ve got people like Coinbase CEO Brian Armstrong openly talking about Bitcoin replacing the dollar as a global reserve currency, which is another thing I’ve been talking about for a couple of years that would have some dramatic effects on this country and its standing in the world. So, unfortunately, this stuff ended up being a lot more important than it should have been. I hope journalists at the big papers and publications with massive followings will start to tune in and tell people, because what I have found is that people really want to know.
Once you explain it to them, they understand it. They have the tools to understand what’s happening, and the democracy of information is necessary for citizens to make the right decisions. I don’t see why people are censoring an important part of this story. I spent my life as a journalist at establishment newspapers, working for establishment politicians. I’m a pretty solid evidence-based guy. I think that this generation of political editors at papers like The Washington Post and The New York Times should be known as the generation of failure for completely missing this story. The fact that they’re racing to do it now proves the point.
Well, let’s close on a hopeful note then. Paint for me — and get out your greens and yellows perhaps — the picture of how America, how society, and maybe even how tech works its way out of this situation. You alluded earlier to the non-billionaires rising up and pursuing a different end. How might that happen? What does it take?
Well, I think that this is going to take millions of people in the street on a regular basis. We have to get through the current crisis, and these tech oligarchs have to see that they’re playing a very dangerous game by cozying up to Trump and facilitating his fantasies of an authoritarian transformation of this country. Over time, though, I think we have to reckon with the fact that these people have enough money to be an ongoing threat and to try this again. So, I think it’s going to take an awareness of the role that billionaires play in our society and a political movement to demand leaders who are willing to address that problem.
What kind of leaders? During the pandemic, the Black Lives Matter Movement rose up, and there was this move toward… I’m not going to say exactly leaderless, but decentralized approaches to movements. I haven’t tested this out, so poke holes in it, if you will.
But the Republican Party and the right have become more centralized in Donald Trump than at any time, I would argue, in our lifetimes. Even Republicans who don’t agree with large portions of what he’s said, many of them are falling in line, and it’s led to this early ability of the Trump administration to make massive progress.
So, if you’re painting a picture of how things might swing the other way, is decentralization the answer? Or does there need to be the rise of a different charismatic figure, dangerous as those might be, to help people believe?
I don’t think it has to be one charismatic figure. I think it has to be a movement that speaks to the real needs of people. Like I said earlier, 60 percent of Americans can’t afford the basics. If the Democratic Party can’t find a way to make use of that, well, then I guess it’s lights out. You’ve got AOC and Bernie Sanders going around drawing tens of thousands of people in traditionally conservative areas. People are looking for leaders who speak to their basic core values and issues, and the Democratic Party just wants to play tag along with crypto. They want to be a lighter version of the Republican Party. I think that way lies doom.
I think especially with these younger people coming up today, they are not in the mood, especially after the next four years, for some half-baked, mealy-mouthed political party that tells them that the status quo is also the future. So, one way or another, we’re going to get sold a vision of that future, and it’s going to be this fascist tech dystopia, or it’s going to be something that serves the majority of people and preserves the ideas of freedom, democracy, and the rule of law. Again, if the Democratic Party can’t find a way to message that, then someone else will come along and do it for them.
Who?
Oh, I don’t know. I don’t think the leader has yet appeared, but people like Alexandria Ocasio-Cortez and Bernie Sanders have. And Zohran Mamdani in New York seems to be tapping into something very powerful. If the Democrats aren’t going to do it, then someone else will.
All right. Gil Duran, the podcast and the newsletter are The Nerd Reich. Is the book also called The Nerd Reich?
The Nerd Reich: Silicon Valley Fascism and the War on Global Democracy.
Now, we’re early, but you’re probably writing it right before we got on here and right after, in all kinds of moments. How far out should people look for it to appear on shelves?
Oh, 2026, but if you want to keep up with the progress, it’s thenerdreich.com and it’s free.
Oh, there’s like a progress bar.
Well, I’m giving people some updates, and you can see where I’m going with stuff.
Gil, thank you for coming on Decoder.
Thank you.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!
Hello, and welcome to Decoder! This is Jon Fortt, CNBC journalist, cohost of Closing Bell: Overtime, and creator of the Fortt Knox streaming series on LinkedIn. I’m stepping in to guest-host a few episodes of Decoder this summer while Nilay is out on parental leave, and I’m very excited about what we’ve been working on.
For my first episode of Decoder, a show about how people make decisions, I wanted to talk to an expert. So I sat down with Cassie Kozyrkov, the founder and CEO of AI consultancy Kozyr. She’s also the former chief decision scientist at Google.
For a long time, Cassie has studied the ins and outs of decision-making: not just decision frameworks but also the underlying social dynamics, psychology, and even, in some cases, the role that the human brain plays in how and why we make certain choices. This is an interdisciplinary field that Cassie calls decision intelligence, which mixes everything from statistics and data science to machine learning. Her expertise landed her a top adviser role at Google, where she spent nearly a decade helping the company make smarter use of data.
In recent years, her work has collided with artificial intelligence. As you’ll hear Cassie explain it, generative AI systems like ChatGPT are making it easier and cheaper than ever to get advice and analysis. But unless you have a clear vision of what it is you’re looking for, and what values underlie the decisions you make, all you’ll get back from AI is a lot of messy data.
So Cassie and I really dug into the science behind decision-making, how it intersects with what we’re seeing in the modern AI industry, and how her current work in AI consulting helps companies better understand how to use these tools to make smarter decisions that can’t just be outsourced to agents or chatbots.
I also wanted to learn a little bit about Cassie’s own decision-making frameworks and how she made some key decisions of her own, such as what to pursue in graduate school and why she decided to leave academia for Google and then strike out on her own just as the generative AI boom was really starting to kick off. This is a fun one, and I think you’re really going to like it.
Okay: decision scientist Cassie Kozyrkov. Here we go.
This transcript has been lightly edited for length and clarity.
Cassie Kozyrkov, welcome to Decoder. I’m going to welcome myself to Decoder too, because this isn’t my podcast. I’m just having a good time punching the buttons, but it’s going to be a lot of fun.
Yeah, it’s so great to be here with you, Jon. And I guess we two friends managed to sneak on and take over this podcast, so I’m really excited for the mischief we’ll cause here.
Let the mischief begin. So the former chief decision scientist at Google, I think, starts to frame what it is you’re good at, and we’re going to get into the implications for AI and leadership and technology and all that. But first, let’s just start with the basics. What’s so hard about making decisions?
Depends on the decision. It can be very easy to make a decision, and one of the things that I advise people is, unless you’re a student of decision-making, your number one rule should be to try to match the effort you put into the decision with what’s at stake in the decision. So, of course, if you’re a student, you can go and agonize over, “How would I apply a decision theoretic approach to choosing my sandwich at lunch?” But don’t be doing that in real life, right?
Slowing down, thinking carefully, considering the hard decisions, and doing your best by them is, again, for the important decisions that will touch your life. Or, even more critically, the lives of thousands, millions, billions of other people, which is something that we see with technology that scales.
It sounds like you’re saying, in part, knowing what’s at stake is one of the first tough things about making decisions.
Exactly. And knowing your priorities. So one of the things that I find really fascinating about what AI in the large language model chatbot sense today is doing is it’s making answers really cheap. And when answers become cheap, that means the question becomes really important. Because what used to happen with decision-making for, again, the big, thorny data-driven decisions, was a decision-maker might come up with something and then ask the data science team to work on it. And then by the time that team came back with an answer, it had been, well, a week if you were lucky, but it could have been six weeks, or six months.
In that time, though, you actually got the opportunity to think about what you’d asked, refine what it meant to you, and then maybe re-ask it. There was time for that shower thought, where you’re like, “Oh, man, I should not have phrased it that way.” But today, you can go and have AI attempt an answer for you, and you can get an answer really quickly.
If you’re used to just immediately running in the direction of your answer, you won’t think as much as you should about, “Well, how do I test if this is actually what I need and what’s good for me? What did I actually ask in the first place? What was the world model, if you like? What were the assumptions that went into this decision?” So it’s all about priorities. It’s all about knowing what’s important.
Even before we get there though, staying at the very basic level, how do people learn to make decisions? There’s the fundamental idea that if you touch a hot stove, you do it once and then you know not to do that again. But how does the wiring in our brain work to teach us to become decision-makers and develop our own processes for doing it?
Oh, I didn’t know that you were going to drag my neuroscience degree into this. It has been a while. I apologize to any actual practicing neuroscientists that I’m about to offend. But at least when I was in grad school, the models that we had for this said that you have your dopaminergic midbrain, which is a region that’s very important for movement and for executing some of what you would think of as the more instinctive behaviors, or those driven by basic rewards — like sugar, avoidance of pain, those kinds of rewards.
So you have what you might think of as an evolutionarily older structure. And isn’t it fascinating that movement and decision-making are similarly controlled in the brain? Is a movement a decision? Is taking an action the same thing as making a decision? We can get into that. And then there are other structures in the prefrontal cortex.
Typically, your ventromedial and dorsolateral prefrontal cortices will be involved in various kinds of what you would think of as effortful or slowed-down decisions — such as the difference between choosing a stock because, I don’t know, you feel as if you don’t even know why, and sitting down and actually running some numbers, doing some research, integrating all of that and having a good, long-think ponder as to what you should do.
So broadly speaking, different regions from different evolutionary stages play into decision-making. The prefrontal cortex is a little newer. But you have these systems — sometimes acting in a coordinated manner, sometimes a little in conflict — involved in decision-making. But what we also really cared about back in those days was moving away from the cartoonish take that you get in popular science, that you just have one region and it just does this one thing and it only does this thing.
Instead, it’s an entire network that is constantly taking in inputs and processing all of them. So, of course, memory would be involved in decision-making and, of course, the ability to imagine, which you would think of more as engaging your visual occipital cortices — that would definitely be involved in some way or other. So it’s a whole thing. It’s a whole network of activations that are implementing human decisions. To summarize this for you, Jon, neuroscientists have no idea how we make decisions. So that’s the funny conclusion, right?
What we can do is prod and pry and get some sense of it, but at the end of the day, the actual nitty-gritty of how humans make decisions is a mystery. What’s also really funny is humans think they know how they make decisions, but quite often you can plant a decision and then unbeknownst to your participants, as we call them in the studies — I’d say victims — unbeknownst to them, the decision was made for them all along. It was primed in some way. Certain inputs got in there.
They thought they made a decision, and then afterward you ask them, so why did you pick red and not blue? They will sing you this beautiful song, explaining how it was their grandmother’s favorite color or whatever it is. Meanwhile, the experimenter implanted that, and if you don’t believe me, go see a magic show. It’s the same principle, right? Stage magicians will plant decisions in their audiences so reliably, otherwise the show wouldn’t work. I’m always fascinated by how seriously we take our human ability to know and understand ourselves and feel as if we’ve got all this agency side by side with professional stage magicians entertaining crowds every day.
But it sounds to me like maybe what really drives decisions, and maybe this motion and movement region of the brain is part of it, is want — what we want. When we’re babies, when we’re toddlers, decisions are: Do I get up? Am I hungry? Do I cry? It’s basic stuff that has to do with mostly physical things, because we’re not intellectuals yet, I guess.
So you need to have a want or a goal in order for there to be a decision to be made, right? Whether we understand what our real motivation is or not, that’s a key ingredient, having some kind of want or goal in decision-making.
Well, it depends how you define it. So with all these terms, when you try to study decision-making in the social biological sciences, you’ll have to take a word, such as “decision,” which we use casually however we like, and then you’ll have to give it a little box that makes that definition more concrete. It’s just like saying: “let X equal…,” right? At the top of your page when you’re doing math, you can say let X equal the speed of light. Now, from now on, whenever I write X, it means the speed of light. And then for some other person’s paper, let X equal five, and then whenever they write X, it means five.
So similarly, we say, “Let decision equal…” and then we define it for the purposes. Typically, what decision analysts will say defines a decision — the way they do their “let decision equal…” at the top of their page — is they say that it is an irrevocable allocation of resources. Then it’s up to you to think about, again, how you want to define what it means for the allocation to be irrevocable, and what it means for the resources to be allocated at all.
Is this an act that a human must make? Is it an act that a system downstream of a human might make? And what are resources? Are resources just money, or could they include time? Or opportunity? For example, what if I choose to go through this door? Well, in this moment, in this universe right now, I didn’t choose to go through that door, and I can’t go back. So in that sense, absolutely every movement that we make is an irrevocable allocation of resources.
And in companies, if you’re Google, do you buy YouTube or not? I mean, that was a big decision back then. Do I hire this person or that person? If it’s a key employee role, that can have a huge impact on whether your company succeeds or fails. Do I invest in AI? Do I or don’t I adopt this technology at this stage?
Right, and you can choose how to frame that to make it definitionally irrevocable. If I hire Jon right now at this point in time, then I’m maybe giving up doing something else, such as eating my sandwich instead of going through all the paperwork of hiring Jon. So I could treat that as irrevocable. But if I hire Jon, I might be able to fire Jon tomorrow and release whatever resources I cared about more than the time and the current opportunity. So then I could treat it as if I have a two-way door on this decision.
So really, it depends on how you want to frame it, and then the rest will somewhat follow in the math. A big piece of how we think about decision-making in psychology is to separate it into judgment and decision-making.
Judgment is separate from decision-making. Judgment comes in when you undertake all the effort of deciding how to decide. What does it actually mean for you to allocate your resources in a way without take-backsies? So it’s up to the decision-maker to think about that. What are we measuring? What’s important? How might we actually want to approach this decision?
Even saying something like, “This decision should be made by gut instinct rather than by effortful calculation,” is part of that judgment process. And then the decision-making process that follows, that is just riding the mathematical consequences of whatever judgment setup you made.
So speaking of setup, give me the typical setup. Why do clients hire you? What kinds of positions are they in where they’re like, “Okay, we need a decision scientist here”?
Well, typically, the big ones are those involving deployment of AI systems. How would you think about solving a problem with AI? That’s a big decision. Should I even put this AI system in place? I’m potentially going to have to gut whatever I’m already using. So if I’ve got some handcrafted system some software developers have already written for me, and I’m getting reasonably good results from that, well, I’m not just going to throw AI in there and hope for the best. Actually, in some situations you would do that, because you want to say, “I’m an AI company.” And so you want to default to putting the AI system in unless you get talked out of it.
But quite often it’s effortful, it’s expensive, and we want to make sure that it’s going to be good enough and right for that company’s situation. So how do we think about measuring that, and how do we think about the realities of building it so it has all the features that we would require in order to want to proceed? It’s a huge decision, this AI decision.
How much does a leader’s or a company’s values matter in that assessment?
Incredibly. I think that’s something that people really miss when it comes to what looks like data or math-y situations. Once we have that bit of math, it looks objective. It looks like “you start here, you end up there,” and there was only one right answer. What we forget is that that little math piece and that data piece and that code piece form a thin layer of objectivity in a big, fat subjectivity sandwich.
That first layer is: What’s even important enough to automate? What’s important enough to do this in the first place? What would I want to improve? In which direction do I want to steer my business? What matters to me? What matters to my customers? How do I want to change the world? These questions have no one right answer, and will need to be articulated clearly in order for the rest to make sense.
The companies tend to articulate those things through a mission statement. Very often, at least in my experience, those mission statements aren’t nearly detailed enough to guide the granular and deep series of events that AI is going to lead us down, no?
Absolutely, and this is a really important point that blossoms into the whole topic of how to think about decision delegation. So the first thing leaders need to realize is that when they are at the very top of the food chain in their organizations, they don’t have the time to be involved in very granular decisions. In fact, most of the job is figuring out how to delegate decision-making to everybody else, choosing whom to trust or what to trust if we’re going to start to delegate to automated systems, and then letting go of that decision.
So you don’t want to be asking the CEO about nitty-gritty topics around, let’s say, the cybersecurity pieces of the company’s shiny new AI system. But what the company needs to do as an organization is make sure that somebody in the project is thinking about all the components that need to be thought about, and that it’s all delegated to the right people. So part of my role then is asking a lot of questions about what’s important, who can do this, how do we put it all together, and how do we make sure that we’re not operating with any blind spots or missing any components.
How ready are clients, typically, to provide you with that information? Is that a conversation they’re used to having?
Again, we’ve come a long way, but for the longest time, as a civilization working with data, we’ve been fascinated by just being able to potentially do a thing even if we don’t know what it’s for. We thought, “Isn’t it cool that we can move this data? Isn’t it cool that we can pull patterns out of it? Isn’t it cool that we can store or collect it at scale?” All without actually asking ourselves, “Well, where are we going, and how are we going to use it?”
We are growing out of that painful, teething phase where everyone was like, “This is fun, and let’s do it for theory.” It’s kind of like saying, “Well, we’ve invented a wheel, and now we can invent a better wheel, and we can now make it into a tire and it can have rubber on it, but maybe it’s made from carbon fiber.”
Now we are moving into, “Okay, this thing enables movement, different investments in this thing enable different speeds of movement, but where do I want to go? Because if I want to go two yards over, then I don’t actually need the car, and I don’t need to be fascinated by it for its own sake.”
Whereas if what I really need to do is be in the adjacent city tomorrow, and I don’t currently have a car, well, then we’re also not going to talk about inventing it from scratch by hiring researchers. We’re not going to think about building it in-house. We’re going to ask, “Who can get you something that will get you there on time and on spec?” These conversations are new, but this is where we’re going. We have to.
It sounds like, and correct me if I’m wrong here, AI is going to help us a lot more with giving us facts and options and less with giving us values and goals.
I hope so. That is the hope, because when you take values and goals from AI, what you're doing is taking an average from the internet. Or perhaps, in a system that has a little bit more logic running on top of it to direct its output, you might be taking those values and goals from the engineers who designed that system. So it's like saying, "If I'm going to use AI as my rough draft every time, that rough draft might be a little bit less me and a little bit more the average soup of culture." If everyone starts doing that, then it's certainly a kind of blending or averaging of our insights.
Perhaps you want that, but I think there’s still a lot of value in having people who are close to their problem areas, who are close to their businesses, who have individual expertise, to think a little bit before they begin, and to really frame what the question is rather than take it from the AI system.
So Jon, how this would go for you is, you might ask an AI system, “How do I live the best possible life?” And it’s going to give you an answer, and that answer is not going to fit you. That’s the thing. It’s going to fit the average Joe. What is or who is the average Joe, and how does that apply to you?
It’s going to go to Instagram, and it’s going to look at who’s got the most likes and followers, and then decide that those people have the best lives, and then take the attributes of those people — how they look, how they talk, the level of education they say they have — and say, well, here’s what you need to do to be like these people who, the data tells us, people think have the best lives. Is that a version of what you mean?
Something like that. More convoluted, because something that is worth realizing is that an advantage machines have over us is memory and attention, right? What I mean by this is if I flash 50 digits onscreen right now and then ask you to recall them, you’re going to have no idea. Then I can go back to those 50 and say, “Yeah, the machine remembered it for us this whole time. It is clearly better at memory than Jon is.”
Then we flash these things, and I say, “Quick, what’s the sum of these digits?” Again, difficult for you, but easy for a machine. So anything that fits in our heads as we discuss it is going to be a shortcut of what’s actually possible when you have memory and attention at scale. In other words, we’ve described this Instagram process that fits in our heads right now, but you should expect that whatever is actually going on with these systems is just too big for us to hold in there.
So sure, it's Instagram and some other sources, and probably even some websites about how to live a good life, applied to us, but it's all kinds of things jumbled together into something too complicated for us to understand what it is. But the important thing is it's not tailored to us specifically, not without us putting in quite a lot of effort to feed in the information required for that tailoring, which I encourage us to do.
Certainly, understanding that advice is cheaper than ever. I will frame up whatever is interesting to me and give it to the system. Of course, I’ll remove the most confidential details, but I’ve asked all kinds of things about how I might, let’s say, improve looking at real estate given my particular situation and my particular tastes. I’ll get a very different answer than if I just say, “Well, how do I invest?” I’ve even improved silly things, like I discovered that I tie my shoelaces too tight. I had no idea, thank you, AI. I now have better technique for having feet that are less sore.
Did you discover through AI that you tie your shoelaces too tight?
Yeah, I went debugging. I wanted to try to figure out why my feet were sore. To help me diagnose this, I gave the system a lot of information about me, such as when my feet were sore, what I was doing at the time, and what shoes I was wearing. We went through a little debugging process: "Okay, the first thing we'll try is a different shoelace-tying technique from the one you've been using: loop and then loosen a little bit." I'm like, "Wow, now my feet don't hurt. How awesome."
So whatever it is that's bugging you, you could go and try to debug it a little bit with AI, and just see what you get. Maybe it's useful, maybe it isn't. But if you simply give the system nothing and ask something like, "How do I become as healthy as possible?" you'll probably not get any information about what to do with your shoelaces. You're just going to get something from the very averaged-out, smoothed-out soup.
In order to get something useful, you have to bring something to the table. You have to know what’s important to you. You have to know what you’re trying to achieve. Sometimes, because your feet hurt right now, it’s important to you right now, and you’re kind of reacting the way that I was. I probably wouldn’t ask any proactive questions about my shoelaces, but sometimes what really helps is stepping back and saying, “Well, what is there in my life right now that could be better? And then why not ask for advice?”
AI makes advice cheaper than ever before. That’s the big revolution. It also helps with all kinds of nuanced advice, like pulling out some of your decision framing — “help me frame my ideas, help me ask myself the questions that would be important for getting through some or other decision.”
Where are most people making the biggest mistakes, or where do they have the biggest blind spots when it comes to decision-making? Is it asking the right questions? Is it deciding what they want? What would you say it is?
One is not getting in touch with their priorities. Again, when you’re not in touch with your priorities, anyone’s advice, even from the best person, could be bad for you. And this is something that also applies to the AI sphere. If we aren’t in touch with what we need and want, and we just ask the soup to give us back some average first draft and then we follow it to a T, what are the chances it will actually fit us very well?
Let me put a specific situation on this, because I'm the parent of a soon-to-be 17-year-old, a second-semester junior in high school who's getting ready to apply to colleges, and this is one of the first major decisions that young people make. It's two-sided, which is really fraught, because you're deciding where to apply, and the schools are deciding who to let in.
It seems like that applies here too, because some people are going to apply to a school because their parents went there, or because it’s an Ivy League. So through that framing, can you talk about the types of mistakes that people make from the perspective of a high schooler applying to college?
I’m going to keep trying to tie this back a little bit to what we can learn about our own interactions with LLMs, because I think that’s helpful for people in this brave new world of how we use these AI tools. So again, we have three stages, approximately: you have to figure out what’s worth asking, what’s worth doing, and then you need to get some advice or technical help, some execution bit — that might be you, it might be the LLM, or might be your dad giving you great advice. And then when you receive the advice, you need to have a moment in which you evaluate if it’s actually good for you. Do I follow this, and is it good advice or bad advice; and do I implement it and do I execute it? It’s these three stages.
So the first one, the least comfortable one, is asking yourself, “Well, how do I actually frame what I’m asking?” So to apply it specifically to your kid, it would be what is the purpose of college for me? Why am I even asking this question? What am I imagining? What are some things I might get out of this college versus that college? What would make each different for me? What are my priorities? Why are these priorities my priorities?
These are questions where if you are not in tune with your answers, what will happen is you will receive advice from wherever — from the culture, from the internet, from your dad — and you are likely to end up doing what is good for them rather than what’s good for you, all from not asking yourself enough preliminary questions.
It’s like the magician scenario. They feed you an answer subconsciously, and you end up spitting that back without even realizing it’s not what you really wanted.
Your dad might say, as my dad did, that economics is a really interesting and cool thing to study. This kind of went into my head when I was maybe 13 years old, and it kept knocking around in there. So that’s how I found myself in economics classes and ended up majoring in economics at the University of Chicago.
Actually, it’s not always true that what your parents put in there makes its way out, of course, because both of my parents were physicists, and I very quickly discovered that I wanted nothing to do with physics because of the constant parental “you should do better in physics, and you should take more physics classes.” And then, of course, after I rebelled in college, I ended up in grad school taking physics in my neuroscience program. So there you go, it comes around full circle.
But the point is that you have to know what you want and what's important to you, and really be in touch with this so that you're not pushed around by other people's advice. And this is important: even what seems like the best advice, even the best advice, could be bad for you. So thinking, "This person is competent and capable, so I should absolutely take their advice," is a mistake. Because if what's important to them is not what's important to you, and you haven't communicated clearly to them or they don't have your best interests at heart, then this intelligent advice is going to lead you off a cliff. I just want to say that with AI, it could be a very high-performing system, but if you haven't given it the context to help you, it's not going to help you.
The AI point is where I wanted to go, and I think you've talked about this in the past too. AI presents itself as very competent and very certain that it's correct, with very little variation that I've seen based on the actual output. It's not saying, "Eh, I'm not totally sure, but I think this" when it's about to hallucinate, versus, "Oh, here's the answer" when it's absolutely right. It's sure almost 100 percent of the time.
So that’s a design choice. Whenever you have actual probabilistic stages in your AI output, you can instead surface something to do with confidence, and this is achievable in many different ways. For some models, even some of the basic models, what happens there is you get a probability first, and then that converts into action or output that the user sees for other situations.
For example, in the backend, you could run that system multiple times, and you could ask it, “What is two plus two?” And then in the backend you could run this 100 times, and you discover that 99 out of 100 times, the answer comes back with a four in it. You could then show some kind of confidence around this being at least what the cultural soup thinks the answer is, right?
Let’s ask, “What is the capital of Australia?” If the cultural soup says over and over that it’s Melbourne, which it isn’t, or that it’s Sydney, which it also isn’t — for those for whom that’s a surprise, Canberra is the right answer. But if enough of the cultural soup says Sydney, and we’re only sourcing from the cultural soup, and we’re not kicking in some extra logic to go specifically to Wikipedia and only draw from that, then you would get the wrong answer with high confidence. But it would be possible to score that confidence.
In situations where the cultural soup isn’t so sure of something, then you would have a variety of different responses coming back, being averaged, and then you could say, “Well, the thing I’m showing you right now is only showing up in 20 percent of cases, or in 10 percent of cases.” Or you could even give a breakdown: “This is the modal answer, the most common answer, and then these are some answers that also show up.” Not to do this is very much a user-experience design decision plus a compute and hardware decision.
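To make that repeated-sampling idea concrete, here is a minimal sketch in Python. It is only an illustration of the technique described above, not any particular vendor's implementation: the ask_model callable is a hypothetical stand-in for whatever model call you use, and 100 samples is just the number from the example. The point is that the spread of answers across runs can be surfaced as a confidence score, along with a breakdown of the modal answer and the runners-up.

```python
from collections import Counter

def answer_with_confidence(ask_model, question, n_samples=100):
    """Ask the same question many times and report how often each
    distinct answer comes back. Returns the modal (most common) answer,
    the fraction of runs that produced it, and the full distribution."""
    answers = [ask_model(question) for _ in range(n_samples)]
    counts = Counter(answers)
    modal_answer, modal_count = counts.most_common(1)[0]
    return modal_answer, modal_count / n_samples, counts

# Hypothetical usage, assuming ask_model wraps some LLM call and
# returns a short string answer:
#   best, confidence, distribution = answer_with_confidence(
#       ask_model, "What is the capital of Australia?")
#   print(f"{best} ({confidence:.0%} of runs); also seen: {distribution}")
```

The obvious catch, which comes up again later in the conversation, is that this spends roughly n_samples times the compute of a single call.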
It’s also a cultural issue, isn’t it?
It is a cultural issue.
It seems to me that in the US, and maybe this is true of a lot of Western cultures, we value confidence, and we value certainty even more sometimes than we value correctness.
There’s this culture in business where we sort of expect right down to the moment when a company fails for the CEO to say, “I’m really confident that we’re going to make this work,” because people want to follow somebody who’s confident, and then the next day they say, “Ah, well, I failed, it didn’t work out.” We kind of accept that and think, “Oh, well, they gave it their best, and they were really confident.”
It’s the same in sports, right? The team’s down three games to one in a best of seven series, and the team that’s only got one win, they’re like, “Oh, we’re really confident we can win.” Well, really, the statistics say you’re probably not going to win, but we know that they have to be confident if they’re going to have any chance. So we accept that, and in a way we’ve created AI in our own image in that respect.
Well, we’ve certainly created AI in our own image. There’s a lot of user-experience design that goes into that, but I don’t think it’s an inevitable thing. I know that on the one hand, there is this concept of the fluency heuristic. So a person or system that appears more fluent, with less hesitation, less uncertainty, is perceived as more trustworthy. This research has been done; it’s old research in psychology.
Now you see that the fluency heuristic is absolutely hackable, because if you forget that you're dealing with a computer system that has some advantages, like memory, attention, and, well, fluency, it can very quickly rattle off a bunch of nonsense you don't understand. That lands on the user or the listener as competence, and so translates as more trustworthy. So our fluency heuristic is absolutely hackable by machine systems. It's much harder for me to hack it as a human. Though we do have artists who manage it very well, it's very difficult to speak fluently on a topic that you have no idea about and don't know how any of the words go together. That only works if it's the blind leading the blind, where no one else in the room knows how any of it works either.
On the other hand, I'll say, at least for me, I think it has helped me in my career to form a reputation that, well, I say it like it is, and so I'm not going to pretend to know a thing when I don't know it. You asked me about neuroscience, and I told you that it's been a long time since my graduate degree, so maybe we should adjust what I'm saying accordingly, right? I do that. That is not for all markets. Let's just say many would think, "She has no idea what she's talking about. Maybe we shouldn't do business with her," but for sure, there's still value in my approach, and I've definitely found it's helped me to become battle-tested and trustworthy.
That said, when it comes to designing AI systems, that stuttering lack of confidence would not create a great user experience. But similarly, some of the things that I talked about here would be expensive compute-wise. What I see a lot in the AI industry is that we have business people thinking that something is not technologically possible because it is not being given to users, and particularly not at scale, or even offered to businesses. Quite often, it is very much technologically possible. It’s just not profitable to offer that feature. There is no good business case. There’s no sign that users will respond to it in a way that will make it worth it.
So when I’m talking about running something 100 times and then outputting something like a confidence score, you would have some decision-making around whether it is 100, 10, or 1,000; and this depends on a slew of factors, which, of course, we could get into if that’s the problem you as a business are solving. But when you just look at it on the surface, I’m saying essentially 100 times more compute, right? Run this thing 100 times instead of once, and for what? Will the users respond to it? Will the business care about it? Yeah, frequently you’d be amazed at what’s already possible. Agents like [OpenAI’s] Operator, [Anthropic’s] Claude Computer Use, [Google’s] Project Mariner, all these things, they are underperforming, relative to where they could be performing, on purpose because it is expensive to run them well. So it will be very exciting when businesses and users are ready to pay more for these capabilities.
So back up for me now, because you left Google about two years ago, a little less than that. You were there for about 10 years, and long before the OpenAI and ChatGPT wave of AI enthusiasm had swept across the globe. But you were working on some of this stuff. So I want to understand both the work at Google and what led you there.
I think you said that your dad first mentioned economics to you when you were 13, and that sounds really young, but I think you started college a couple of years later. So you were actually on your way to those studies at the time. What made you decide to go to college that early and what was motivating you?
One of the things we don’t talk about enough is that knowing what motivates someone tells you more about that person than pretty much anything else could. Because if you’re just observing the outcomes, and you’re having to make your own inferences about how they got there, what they did, why they did it, particularly with survivorship bias occurring, it might look like they’re such total heroes. Then you look at their actual decision process, and that may tell you something very different, or you may think someone’s not very successful without realizing that they’re optimizing for a very different thing from you. This is all a very long way of saying that — I’m glad we’re friends, Jon, because I’ll go for it — but it’s always just such a private question. But yeah, why did I go to college so young? Honestly, it was because I had skipped grades in elementary school.
The reason I skipped grades in elementary school was because I came home — I was nine years old or so — and informed my mother that I wanted to do this. I cannot remember why. For the life of me, I don’t know. I was doing something on a nine-year-old’s whim, and skipping grades wasn’t a done thing in South Africa where I was growing up. So my parents had to really battle with the school and even the department of education to allow it. So there I was, getting to high school at 12, and I actually really enjoyed being younger. Okay, you get bullied a little bit, but I enjoyed it. I enjoyed seeing that you could learn a lot, and I wasn’t intellectualizing it the way I am right now, but you could learn a lot from people who were older than you.
They can kind of push you, and I’m a huge believer in just the act of being surrounded by people who will push you, which is maybe my biggest argument for why college still makes sense in the AI era. Just go be in a place where everyone’s on a journey of self-improvement. So I learned this and ended up making friends with 12th-graders when I was 13, and then at 14, they were all out already and in college. And I had spent most of my time with these older kids, and now I’m stuck, and I basically want my friends back. So that is why I went so young. It was 100 percent just a teenager being driven by being a social animal and wanting to be around my peer group, which…
But be fair to yourself. It sounds as if you just wanted to see how fast the car could go, right? That’s part of what it was at nine. You realized that you were capable of bigger challenges than the ones you had been given. So you were kind of like, “Well, let’s see.” And then you went and you saw that you were actually able to handle that, the intellectual part. People probably said, “Oh, but the social part would be hard.” But “Hey, I got friends who are seniors. That part’s working too. Well, let’s see if I can actually drive this at college speed.” That was part of it, right?
I am so easy to manipulate with the words, “You can’t do X.” So easy to manipulate. I’m like, “No, let me show you. I love a challenge. Let’s get this thing done.” So yeah, I think you’re right in your assessment.
So then you went on to do graduate work, after the University of Chicago, to study neuroscience, with some economics in there too?
So I actually went to Duke for neuroeconomics. That was the field. You know how there's macroeconomics and microeconomics? Well, this was like nano-picoeconomics. This was about how the brain implements decision-making. So, of course, the courses involved experimental microeconomics. That was part of it, but this was from the psychology and neuroscience departments. So it's technically a graduate degree in psychology and neuroscience with a focus on the neuroscience of decision-making, which is called neuroeconomics.
I also went to grad school twice, which is definitive proof that I'm a bad decision-maker, in case anyone was going to think that I personally am a good one. I've just got the technique, folks. I'll advise you. But I went to grad school twice, and I'm just kidding; it was actually good for me to go to grad school twice. My second time was for mathematical statistics. My undergraduate work was economics and statistics. So then I went for math statistics, where I did a lot of what we called back then machine learning, what we would call AI today.
How many PhDs were involved there?
[Laughs] No PhDs were harmed in the making of this person.
Okay, but studying both of those disciplines. What were you going to do with that?
So coming back to college: I was taking courses around decision-making, despite having been an economics and statistics major, and I got a taste for this. I'll tell you why I was in the stats major. The stats major happened because at about age eight or nine, just before this jumping of grades, I discovered the most beautiful thing in the world, which everybody knows is spreadsheets. That was for me the most gorgeous thing. Maybe it's the librarian's urge to put order into chaos.
So I had this gemstone collection. Its entire purpose was to give me another row for my spreadsheet. That was the whole thing. I get an amethyst, I could be like, Oh, it is purple, and how hard is it? And it’s translucent. And I still find, though I have no business doing it, that the act of data entry with a nice glass of wine is just such a soothing thing to do.
So I had been playing with data. Once you start collecting it, you also find that you start manipulating it. You start to have these urges like, “Oh, I wonder if I could get the data of all my files on my computer all into a spreadsheet. Well, let me figure out how to do that.” And then you learn a little bit of coding. So I just got all these data skills for free, and I thought data was really pretty. So I thought stats would be my easy A. Little did I know that it’s actually philosophy, and the philosophy bits are always the bits that should kick your butt or you’re missing the point. But of course, manipulating the data bits was super-duper easy. Statistics, I realized as I began to soak in the philosophy, is the discipline of changing your mind under uncertainty.
Economics is the discipline of scarcity and the allocation of scarce resources. And even if money is not scarce, something is always scarce. People are mortal; time is scarce. So the question, "How are you going to make allocations, or what you might call decisions?" got in there through economics. So did questions like, "How do you change your mind, and what is your mind set to do? What actions are on the table? What would it take to talk you out of it?"
I started asking these questions, and then how does this actually work in the human animal, and how could it work better? These questions came in through the psychology and neuroscience side of my studies. So I was studying decision-making from every perspective, and I was hoarding. So here as well, did I know what career I was going to have? I was actively discouraged from doing this. When I was at the University of Chicago, even at that liberal arts place, my undergraduate adviser said, “I have no idea what job you think you’re going to get with all this stuff.”
I said, “That’s okay, I’m learning. I think this is kind of important.” I hadn’t articulated back then what I’ll say now, which is that data is pretty, but there’s no “why” in data. The why comes from the decision-maker, right? The purpose has to come from people. It’s either your own purpose or the purpose of the people whom you represent, and that is what gives direction to all the rest of it. So [it’s] just studying data where it feels like there’s a right answer because the professor set the problem up so that there’s a right answer. If they had set it up differently, there could have been different answers.
Realizing that the setup has infinite choices, that is what gives data its why, and its meaning. That is the decision piece. That’s the most important thing I think any of us could spend our time on. Though we all do spend our time on it and do approach it from different lenses.
So then why Google? Why did you promise yourself you wouldn’t work for a company for more than 10 years?
Well, we’re really getting into all the things. So Google is a funny one, and now I’ll definitely say some things that I don’t think I’ve said on any podcasts. But the true story of that is that I was in a math stat PhD program, and what I didn’t know was that my adviser — this was at North Carolina State — had just taken an offer at Berkeley, where he could not bring any of his students along with him. That was a pretty bad thing for me, in the middle of my PhD.
Now, separate from this going on that I had no idea about, I take Halloween pretty seriously. It's my thing. At Kozyr, it's a work holiday, so people can enjoy Halloween properly if they want to. And I had come in on Halloween morning dressed as a punch card, as one does, with proper Fortran on it to print "Happy Halloween." A Googler was giving a talk, and I was sitting in that audience, the only person in costume, because everyone else is lame.
Let that go on the record. My former classmates should have been in costume, but we can still be friends. And so at 9AM, I'm dressed like this. The Googler, talking to the head of the department, is like, "Who's that grad student who was dressed as a punch card?" The head of the department, not having seen me, still said, "Oh, that's probably Cassie. Last year she was dressed as a sigma field," which is a thing from measure theory. So I was being a huge nerd. The Googler thought, "Culture fit, 100 percent. Let's get her application in."
And so the application was just for a summer internship, which seemed like a harmless thing to do. Sure, let’s try it. It’s an adventure. It’s Google. Then as I was signing up for it, my adviser was like, “This is a very good thing for you. You shouldn’t even hesitate. Don’t be asking me if I want you here doing summer research. Definitely go to Google. You can finish your PhD there. Go to Google.” And the rest is history. So a much, much better option than having to restart and refigure things with a new adviser.
How did you end up becoming this translator between the data people and the decision-makers?
The role that I ended up getting at Google, the formal internship name, was decision-support intern. I thought to myself, “We’ll figure out the support, and we’ll figure out the intern.” But decision, this is what I’ve been training for my whole life. The team that I was in was like a SWAT team for data-driven-decision making. It was very, very close to Google’s primary revenue. So this was a no-messing-around team of statisticians that called itself decision support. It was hardcore statistics flavored with data science, and it also had a very hardcore engineering group — it was a very big group. I learned a lot there.
I applied to potentially stay in the same group for a full-time role with strong prompting from my PhD adviser, and I thought I was going to join that group. A tangential thing happened, which is that I took a weekend in New York City before going to Mountain View, which is where I picked out my apartment. I thought I was going to join this group. I was really, really excited to be surrounded by deep experts in what I cared about. These experts were actually working more on the data side of things because what the decisions are and how we approach them are so regimented in that part of Google. But I took this trip to New York City, and I realized, and this was one of the biggest gut-punch decision-making moments for me. I realized I’m making a terrible mistake, that if I go there, I will just not enjoy my life as much as if I go to New York City.
So there was so much instinct, there was so much, “Oh, no, I should actually really reevaluate what I’m doing. Am I going to enjoy living in Mountain View?” I was just so set on getting the offer that I hadn’t done what I really should have done, which was to evaluate my priorities properly. So the first thing I did was I called the recruiter and I said, “Whoa, whoa, whoa, whoa. Can I get a role in New York City instead? It doesn’t matter which team. Is there something we can find for me to do here?” So I joined the New York office instead. Very, very different projects, very, very different group. And there I realized that not all of Google had this regimented approach to decision-making. There is so much translation, even at a place like Google, that’s necessary for products that are less close to the revenue stream.
So then there has to be a lot more conversation about why, how to do resource allocation, and who's even in charge there, right? Those are things that, when you're moving billions around at the click of a mouse, you tend to have answered already. But in these other parts of Google, there was so much more color in how you could approach it, and such a big chasm between the people tasked with that and any of the data or engineering or data science efforts we might have.
So to really try to fill that gap and put a bridge over it, so that things could be useful, I worked way more than my formal job said I should to try to build infrastructure. I built early statistical consulting, because that wasn't there. You couldn't just go ask a statistician who'd sit down with you and talk through what your project was going to be.
I convinced people who were statisticians by specialization to offer their 20 percent time to support projects that were not their own, to put some structure to this, and I made resources and courses for decision-makers on how to think about dealing with data folk. I really tried to bring these two areas together, and eventually it became my job. But for the longest time, it wasn't. Sometimes I faced questions: What are you? Who are you? Why are you actually doing what you're doing? But just seeing that things could be made more effective, and kinder for the experts, who were otherwise going to work on poorly specified problems unless you specified the problems well first, was motivating, so that's why I did it.
Trying to tie this all together, it sounds like that values and goals piece, and the philosophy element you talked about as being important in school, were coming back into play, versus just focusing on the external expectation: you're going to work for Google, so of course you're going to go to Mountain View. That's where the power is. That's where the data people go, and you're smart enough to be with the data people.
So if you’re going to run the car as fast as possible, you’re going to go over there, but you made a different kind of decision than perhaps the nine-year-old Cassie made. You stepped back and said, Wait a minute, what’s going to be best for me? And how can I work within that while pulling in some of this other information?
Yeah, for sure. I think that something that we can say to your 17-year-old is that it's okay. It's okay if it's difficult when you're young to take stock of what you actually are. You're not formed yet, and maybe it's okay to let the wind take you a little bit, particularly when you have a great dad who's going to give you great advice. But it would be good if you can eventually mature into more of a habit of saying, "Well, I'm not the average Joe, so what do I actually want?" And as for working for what is thought of as the top team, I don't want to offend any internal Googlers, but they did have that reputation.
If you wanted to be number one and then number one again and number one some more times, that would’ve been the way to do it. But again, maybe it’s worth having something else that you optimize for in life. And I, as it turns out, I’m a theater kid, a lifelong theater kid. I’m an absolute nerd of theater. I’m going to London for just a few days in two weeks, and I’m seeing every evening show and matinee. I’m just going to hoard as much theater as I can for the soul. And so living in New York City was going to be just a better fit, not only for theater but for so much more that that city has to offer.
Having lived in both Silicon Valley and the New York area, I promise you that yes, the theater is far better in New York.
I mean, I went to all the plays in Silicon Valley as well, and I did my homework. I knew what I was getting into or out of. But yeah, it takes practice and skill to know that some of those questions are even questions worth asking. And I've developed that practice and skill from originally knowing how to do it to help others, having studied it formally, being book smart about it: these are the questions you ask, this is the order you ask them in. It's something else to turn that on yourself and ask yourself the hard questions; book smartness isn't enough for that.
That’s good advice for all of us, whether we’re running businesses or just trying to figure out life, we’ve all got decisions to make. Cassie Kozyrkov, founder and CEO of Kozyr, former chief decision scientist at Google. Thanks for joining me on this episode of Decoder.
Thanks for having me, Jon.
Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!